Temporal prediction of modified adaptive loop filter to support temporal scalability
Patent abstract:
A video coder can reconstruct a current image of video data. A current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. In addition, for each respective array of a plurality of arrays corresponding to different temporal layers, the video coder can store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array. The video coder determines, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters.

Publication number: BR112019013705A2
Application number: R112019013705
Filing date: 2018-01-04
Publication date: 2020-04-28
Inventors: Zhang Li; Karczewicz Marta; Chien Wei-Jung; Wang Ye-Kui
Applicant: Qualcomm Inc
IPC main classification:
Patent description:
TEMPORAL PREDICTION OF MODIFIED ADAPTIVE LOOP FILTER FOR TEMPORAL SCALABILITY SUPPORT

[001] This application claims the benefit of US Provisional Application No. 62/442,322, filed on January 4, 2017, and US Provisional Application No. 62/445,174, filed on January 11, 2017, the entire content of each of which is incorporated herein by reference.

TECHNICAL FIELD

[002] This description relates to video encoding and decoding.

BACKGROUND

[003] Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video game devices, video game consoles, cellular or satellite radio telephones, so-called smart phones, video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4 Part 10, Advanced Video Coding (AVC), the ITU-T H.265 standard, the High Efficiency Video Coding (HEVC) standard, and extensions of such standards. Video devices can transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.

[004] Video compression techniques can perform spatial (intra-image) prediction and/or temporal (inter-image) prediction to reduce or remove redundancy inherent in video sequences. For block-based video encoding, a video slice (for example, a video frame or a portion of a video frame) can be divided into video blocks, such as coding tree blocks and coding blocks. Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be coded and the predictive block. For greater compression, the residual data can be transformed from the pixel domain to a transform domain, resulting in residual transform coefficients, which can then be quantized.

SUMMARY

[005] In general, this description describes techniques related to adaptive loop filtering (ALF), especially for predicting ALF filters from previously coded frames, slices or tiles. The techniques can be used in the context of advanced video codecs, such as HEVC extensions or the next generation of video coding standards.

[006] In one example, this description describes a method of decoding video data, the method comprising: receiving a bit stream that includes a coded representation of a current image of the video data,
in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstructing the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, storing, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determining, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.

[007] In another example, this description describes a method of encoding video data, the method comprising: generating a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstructing the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, storing, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determining, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and after applying the adaptive loop filtering to the current region, using the current region to predict a subsequent image of the video data.

[008] In another example, this description describes a device for decoding video data, the device comprising: one or more storage media configured to store the video data; and one or more processors configured to: receive a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstruct the current image; for each respective array of a plurality of arrays that correspond to different temporal layers, store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data
that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determine, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.

[009] In another example, this description describes a device for encoding video data, the device comprising: one or more storage media configured to store the video data; and one or more processors configured to: generate a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstruct the current image; for each respective array of a plurality of arrays that correspond to different temporal layers, store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determine, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and after applying the adaptive loop filtering to the current region, use the current region to predict a subsequent image of the video data.

[0010] In another example, this description describes a device for decoding video data, the device comprising: means for receiving a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; means for reconstructing the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, means for storing, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; means for determining, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and means for applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.
[0011] In another example, this description describes a device for encoding video data, the device comprising: means for generating a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; means for reconstructing the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, means for storing, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; means for determining, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; means for applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and means for using, after applying the adaptive loop filtering to the current region, the current region to predict a subsequent image of the video data.

[0012] In another example, this description describes a computer-readable data storage medium that stores instructions that, when executed, cause one or more processors to: receive a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstruct the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determine, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.

[0013] In another example, this description describes a computer-readable storage medium that stores instructions that, when executed, cause one or more processors to: generate a bit stream that includes a coded representation of a current image of the video data, in which a current region of the current image is associated with a temporal index that indicates a temporal layer
to which the current region belongs; reconstruct the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determine, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and after applying the adaptive loop filtering to the current region, use the current region to predict a subsequent image of the video data.

[0014] Details of one or more aspects of the description are set out in the attached drawings and in the description below. Other characteristics, objectives and advantages of the techniques described in this description will be evident from the description, drawings and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] Figure 1 is a block diagram that illustrates an exemplary video encoding and decoding system that can use one or more techniques described in this description.

[0016] Figure 2 illustrates three different exemplary Adaptive Loop Filtering (ALF) filter supports.

[0017] Figure 3 illustrates an example of a Random Access configuration with a Group of Pictures (GOP) equal to 16.

[0018] Figure 4A illustrates an array for storing filter parameters.

[0019] Figure 4B illustrates a different state of the array for storing filter parameters.

[0020] Figure 5 illustrates a plurality of arrays corresponding to different temporal layers, according to a first technique of this description.

[0021] Figure 6 illustrates an array for storing ALF parameters and associated temporal layer index values, according to a second technique of this description.

[0022] Figure 7 is a block diagram illustrating an exemplary video encoder that can implement one or more techniques described in this description.

[0023] Figure 8 is a block diagram that illustrates an exemplary video decoder that can implement one or more techniques described in this description.

[0024] Figure 9 is a flow chart illustrating an exemplary operation of a video encoder, according to a first technique of this description.

[0025] Figure 10 is a flow chart illustrating an exemplary operation of a video decoder, according to the first technique of this description.

[0026] Figure 11 is a flow chart that illustrates an exemplary operation of a video encoder, according to a second technique of this description.

[0027] Figure 12 is a flow chart that illustrates an exemplary operation of a video decoder, according to the second technique of this description.

DETAILED DESCRIPTION

[0028] Adaptive loop filtering (ALF) is a process that applies one or more adaptive filters (i.e., ALF filters) as part of a coding loop to improve the quality of decoded video data. An ALF filter is associated with a set of coefficients. A video coder (i.e., a video encoder or a video decoder) can apply ALF filters with different coefficients to different blocks of the same image, based on the characteristics of the blocks. To reduce the overhead associated with signaling the coefficients associated with the ALF filters, a video coder can store, in an array, sets of ALF parameters for ALF filters used in previously coded images, tiles or slices. A set of ALF parameters can include multiple coefficients associated with one or more ALF filters. For example, a set of ALF parameters can indicate coefficients associated with multiple filters. The video coder replaces the sets of ALF parameters in the array on a first-in, first-out (FIFO) basis.
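As a rough illustration of the FIFO replacement just described, the sketch below stores sets of ALF parameters in a single bounded buffer; the type names (AlfParamSet, AlfParamFifo) and the bound are illustrative assumptions, not JEM data structures.

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical container for one set of ALF parameters: the coefficients of
// one or more filters used for a previously coded image, tile, or slice.
struct AlfParamSet {
    std::vector<std::vector<int>> filterCoeffs;  // one coefficient list per filter
};

// Sketch of the single FIFO buffer described above: once the buffer is full,
// the oldest stored set is discarded to make room for the newest one.
class AlfParamFifo {
public:
    explicit AlfParamFifo(std::size_t maxSets) : maxSets_(maxSets) {}

    void store(const AlfParamSet& params) {
        if (buffer_.size() == maxSets_)
            buffer_.pop_front();  // first in, first out
        buffer_.push_back(params);
    }

    // A coded region can reuse a stored set by signaling its index instead of
    // re-signaling all filter coefficients.
    const AlfParamSet& get(std::size_t index) const { return buffer_.at(index); }
    std::size_t size() const { return buffer_.size(); }

private:
    std::size_t maxSets_;
    std::deque<AlfParamSet> buffer_;
};
```

With a single buffer of this kind, a stored set can come from an image with a higher temporal identifier than the image currently being decoded, which is the problem the techniques described below address.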
[0029] Different images in a video sequence can belong to different temporal layers. Different temporal layers are associated with different temporal identifiers. An image in a given temporal layer can be decoded with reference to other images having the temporal identifier of the given temporal layer and to images having temporal identifiers with values less than the value of the temporal identifier of the given temporal layer.

[0030] Because a video coder stores filter data (for example, sets of ALF parameters) in the array on a FIFO basis, the array can contain filter data from an image having a temporal identifier greater than the temporal identifier of the image being decoded. This could potentially cause errors in the filtering process, because it can make the current image dependent on an image in a temporal layer with a higher temporal identifier than the current image's temporal layer, even if the image with the higher temporal identifier is lost or does not need to be decoded.

[0031] This description describes techniques that can solve this problem. In one example, a video coder can store, in a plurality of arrays, sets of ALF parameters used in applying one or more ALF filters to samples of image regions of the video data coded before the current image. Each respective array of the plurality of arrays corresponds to a respective different temporal layer. In addition, the video coder can determine, based on a selected set of ALF parameters in the array corresponding to a temporal layer to which a current region belongs, an applicable set of ALF parameters for the current region. This description can use the term region to refer to a slice or another type of area of a current image on which ALF is performed. The video coder can apply, based on the applicable set of ALF parameters for the current region, an ALF filter to the current region.

[0032] In some examples, a video coder stores, in an array, sets of ALF parameters used in applying one or more ALF filters to samples of images of the video data decoded before the current image. Additionally, in this example, the video coder stores, in the array, the temporal layer indices associated with the sets of ALF parameters. A temporal layer index associated with a set of ALF parameters indicates the temporal layer of the region in which the set of ALF parameters was used to apply an ALF filter. In this example, the video coder can determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region. In addition, in this example, the video coder can apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.
[0033] In either of these examples, the association of ALF parameters with temporal layers can help to avoid the problem of a current image being potentially dependent on the decoding of an image in a higher temporal layer.
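A minimal sketch of the first technique, under the same assumptions as the previous sketch (the names and the per-array bound are illustrative, not signaled syntax):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

struct AlfParamSet {                             // as in the earlier sketch
    std::vector<std::vector<int>> filterCoeffs;
};

// One bounded FIFO per temporal layer. The array for layer T only ever holds
// sets used at layer T or lower, so a region never predicts its ALF filters
// from a higher temporal layer.
class TemporalAlfBuffers {
public:
    TemporalAlfBuffers(std::size_t numLayers, std::size_t maxSetsPerLayer)
        : arrays_(numLayers), maxSets_(maxSetsPerLayer) {}

    // After coding a region at temporal layer tId, its ALF parameters become
    // prediction candidates for layer tId and for every higher layer.
    void storeAfterCoding(const AlfParamSet& params, std::size_t tId) {
        for (std::size_t t = tId; t < arrays_.size(); ++t) {
            if (arrays_[t].size() == maxSets_)
                arrays_[t].pop_front();          // FIFO replacement per array
            arrays_[t].push_back(params);
        }
    }

    // For the current region, the applicable set is selected (for example, by
    // a signaled index) only from the array of the region's own temporal layer.
    const AlfParamSet& select(std::size_t tId, std::size_t signaledIndex) const {
        return arrays_.at(tId).at(signaledIndex);
    }

private:
    std::vector<std::deque<AlfParamSet>> arrays_;
    std::size_t maxSets_;
};
```

Because storeAfterCoding writes a set only into the array of its own temporal layer and the arrays of higher layers, the array consulted by select for a region at temporal layer T can never contain parameters from a layer above T, which is the property described in the preceding paragraphs.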
[0034] Figure 1 is a block diagram illustrating an exemplary video encoding and decoding system 10 that can use techniques of this description. As shown in Figure 1, system 10 includes a source device 12 that provides encoded video data to be decoded at a later time by a destination device 14. In particular, the source device 12 provides the encoded video data to the destination device 14 via a computer-readable medium 16. The source device 12 and the destination device 14 can comprise any of a wide range of devices, including desktop computers, notebook (that is, laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called smart phones, televisions, cameras, display devices, digital media players, video game consoles, video streaming devices, or the like. In some cases, the source device 12 and the destination device 14 are equipped for wireless communication. Thus, the source device 12 and the destination device 14 can be wireless communication devices. The techniques described in this description can be applied to wireless and/or wired applications. The source device 12 is an exemplary video encoding device (that is, a device for encoding video data). The destination device 14 is an exemplary video decoding device (that is, a device for decoding video data).

[0035] The illustrated system 10 of Figure 1 is merely an example. Techniques for processing video data can be performed by any digital video encoding and/or decoding device. In some examples, the techniques can be performed by a video encoder/decoder, typically referred to as a CODEC. The source device 12 and the destination device 14 are examples of such coding devices, in which the source device 12 generates encoded video data for transmission to the destination device 14. In some examples, the source device 12 and the destination device 14 operate in a substantially symmetrical manner, such that each of the source device 12 and the destination device 14 includes video encoding and decoding components. Therefore, system 10 can support one-way or two-way video transmission between the source device 12 and the destination device 14, for example, for video streaming, video playback, video broadcasting, or video telephony.

[0036] In the example of Figure 1, the source device 12 includes a video source 18, a storage medium 19 configured to store video data, a video encoder 20 and an output interface 22. The destination device 14 includes an input interface 26, a storage medium 28 configured to store encoded video data, a video decoder 30, and a display device 32. In other examples, the source device 12 and the destination device 14 include other components or arrangements. For example, the source device 12 can receive video data from an external video source, such as an external camera. Likewise, the destination device 14 can interface with an external display device, instead of including an integrated display device.

[0037] Video source 18 is a source of video data. The video data can comprise a series of images. Video source 18 may include a video capture device, such as a video camera, a video file containing previously captured video, and/or a video feed interface for receiving video data from a video content provider. In some examples, video source 18 generates video data based on computer graphics, or a combination of live video, archived video and computer-generated video. Storage medium 19 can be configured to store the video data. In each case, the captured, pre-captured or computer-generated video can be encoded by the video encoder 20.

[0038] The output interface 22 can output the encoded video information to a computer-readable medium 16. The output interface 22 can comprise various types of components or devices. For example, the output interface 22 may comprise a wireless transmitter, a modem, a wired network component (for example, an Ethernet card), or another physical component. In examples where the output interface 22 comprises a wireless transmitter, the output interface 22 can be configured to transmit data, such as encoded video data, modulated according to a cellular communication standard, such as 4G, 4G-LTE, LTE Advanced, 5G, and the like. In some examples where the output interface 22 comprises a wireless transmitter, the output interface 22 can be configured to transmit data, such as encoded video data, modulated according to other wireless standards, such as the IEEE 802.11 specification, an IEEE 802.15 specification (for example, ZigBee™), a Bluetooth™ standard, and the like. Thus, in some examples, the source device 12 comprises a wireless communication device that includes a transmitter configured to transmit encoded video data. In some of these examples, the wireless communication device comprises a telephone handset and the transmitter is configured to modulate, according to a wireless communication standard, a signal comprising the encoded video data.

[0039] In some examples, the circuitry of the output interface 22 is integrated with the circuitry of the video encoder 20 and/or other components of the source device 12. For example, the video encoder 20 and the output interface 22 can be parts of a system on a chip (SoC). The SoC can also include other components, such as a general-purpose microprocessor, a graphics processing unit, and so on.

[0040] The destination device 14 can receive the encoded video data to be decoded via the computer-readable medium 16. The computer-readable medium 16 can comprise any type of medium or device capable of moving the encoded video data from the source device 12 to the destination device 14. In some examples, the computer-readable medium 16 comprises a communication medium to allow the source device 12 to transmit encoded video data directly to the destination device 14 in real time. The communication medium can comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium can form part of a packet-based network, such as a local area network, a wide area network, or a global network such as the Internet. The communication medium may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from the source device 12 to the destination device 14. The destination device 14 may comprise one or more data storage media configured to store encoded video data and decoded video data.
[0041] In some examples, the output interface 22 may output data, such as encoded video data, to an intermediate device, such as a storage device. Similarly, the input interface 26 of the destination device 14 can receive encoded data from the intermediate device. The intermediate device can include any of a variety of distributed or locally accessed data storage media, such as a hard disk, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage medium for storing encoded video data. In some examples, the intermediate device corresponds to a file server. Exemplary file servers include network servers, FTP servers, network-attached storage (NAS) devices, or local disk drives.

[0042] The destination device 14 can access the encoded video data through any standard data connection, including an Internet connection. This can include a wireless channel (for example, a Wi-Fi connection), a wired connection (for example, DSL, cable modem, etc.) or a combination of both that is suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the storage device can be a streaming transmission, a download transmission, or a combination thereof.

[0043] The computer-readable medium 16 may include transient media, such as a wireless transmission or a wired network transmission, or storage media (that is, non-transient storage media) such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc, or other computer-readable media. In some examples, a network server (not shown) can receive encoded video data from the source device 12 and provide the encoded video data to the destination device 14, for example, via network transmission. Similarly, a computing device of a medium production facility, such as a disc stamping facility, can receive encoded video data from the source device 12 and produce a disc containing the encoded video data. Therefore, the computer-readable medium 16 can be understood as including one or more computer-readable media of various forms, in several examples.

[0044] The input interface 26 of the destination device 14 receives data from the computer-readable medium 16. The input interface 26 can comprise various types of components or devices. For example, the input interface 26 may comprise a wireless receiver, a modem, a wired network component (for example, an Ethernet card), or another physical component. In examples where the input interface 26 comprises a wireless receiver, the input interface 26 can be configured to receive data, such as the bit stream, modulated according to a cellular communication standard, such as 4G, 4G-LTE, LTE Advanced, 5G, and the like. In some examples where the input interface 26 comprises a wireless receiver, the input interface 26 can be configured to receive data, such as the bit stream, modulated according to other wireless standards, such as the IEEE 802.11 specification, an IEEE 802.15 specification (for example, ZigBee™), a Bluetooth™ standard, and the like. Thus, in some examples, the destination device 14 may comprise a wireless communication device comprising a receiver configured to receive encoded video data. In some of these examples, the wireless communication device comprises a telephone handset and the receiver is configured to demodulate, according to a wireless communication standard, a signal comprising the encoded video data.
In some examples, the source device 12 may comprise a transmitter and a receiver, and the destination device 14 may comprise a transmitter and a receiver.

[0045] In some examples, the circuitry of the input interface 26 can be integrated with the circuitry of the video decoder 30 and/or other components of the destination device 14. For example, the video decoder 30 and the input interface 26 can be parts of an SoC. The SoC can also include other components, such as a general-purpose microprocessor, a graphics processing unit, and so on.

[0046] The storage medium 28 can be configured to store encoded video data, such as encoded video data (for example, a bit stream) received by the input interface 26. The display device 32 displays the decoded video data to a user. The display device 32 may comprise any of a variety of display devices, such as a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, or another type of display device.

[0047] The video encoder 20 and the video decoder 30 can each be implemented as any of a variety of suitable programmable and/or fixed-function circuits, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combination thereof. When the techniques are partially implemented in software, a device can store instructions for the software in a suitable non-transitory computer-readable medium and can execute the instructions in hardware using one or more processors to perform the techniques of this description. Each of the video encoder 20 and the video decoder 30 can be included in one or more encoders or decoders, any of which can be integrated as part of a combined encoder/decoder (CODEC) in a respective device.

[0048] In some examples, the video encoder 20 and the video decoder 30 encode and decode video data according to a video coding standard or specification. For example, the video encoder 20 and the video decoder 30 can encode and decode video data according to ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) and Multi-view Video Coding (MVC) extensions, or another video coding standard or specification. In some examples, the video encoder 20 and the video decoder 30 encode and decode video data according to High Efficiency Video Coding (HEVC), which is known as ITU-T H.265, its range and screen content coding extensions, its 3D video coding extension (3D-HEVC), its multiview extension (MV-HEVC), or its scalable extension (SHVC). In some examples, the video encoder 20 and the video decoder 30 operate according to other standards, including standards currently under development.

[0049] In HEVC and other video coding specifications, video data includes a series of images. Images can also be referred to as frames. An image can include one or more sample arrays. Each respective sample array of an image can comprise a set of samples for a respective color component. An image can include three sample arrays, denoted S_L, S_Cb and S_Cr. S_L is a two-dimensional array (that is, a block) of luma samples. S_Cb is a two-dimensional array of Cb chroma samples.
S_Cr is a two-dimensional array of Cr chroma samples. In other cases, an image may be monochrome and may include only an array of luma samples.

[0050] As part of encoding video data, the video encoder 20 can encode images of the video data. In other words, the video encoder 20 can generate encoded representations of the images of the video data. An encoded representation of an image can be referred to here as an encoded image.

[0051] To generate an encoded representation of an image, the video encoder 20 can encode blocks of the image. The video encoder 20 can include, in a bit stream, an encoded representation of the video block. In some examples, to encode a block of the image, the video encoder 20 performs intra prediction or inter prediction to generate one or more predictive blocks. In addition, the video encoder 20 can generate residual data for the block. The residual block comprises residual samples. Each residual sample can indicate a difference between a sample of one of the generated predictive blocks and a corresponding sample of the block. The video encoder 20 can apply a transform to blocks of residual samples to generate transform coefficients. In addition, the video encoder 20 can quantize the transform coefficients. In some examples, the video encoder 20 can generate one or more syntax elements to represent a transform coefficient. The video encoder 20 can entropy encode one or more of the syntax elements that represent the transform coefficient.

[0052] More specifically, when encoding video data according to HEVC or other video coding specifications, to generate an encoded representation of an image, the video encoder 20 can divide each sample array of the image into coding tree blocks (CTBs) and encode the CTBs. A CTB can be an NxN block of samples in a sample array of an image. In the HEVC main profile, the size of a CTB can vary from 16x16 to 64x64, although technically 8x8 CTB sizes can be supported.

[0053] A coding tree unit (CTU) of an image may comprise one or more CTBs and may comprise syntax structures used to encode the samples of the one or more CTBs. For example, each CTU can comprise a CTB of luma samples, two corresponding CTBs of chroma samples, and syntax structures used to encode the samples of the CTBs. In monochrome images or images having three separate color planes, a CTU can comprise a single CTB and syntax structures used to encode the samples of the CTB. A CTU can also be referred to as a tree block or a largest coding unit (LCU). In this description, a syntax structure can be defined as zero or more syntax elements present together in a bit stream in a specified order. In some codecs, an encoded image is an encoded representation containing all the CTUs of the image.

[0054] To encode a CTU of an image, the video encoder 20 can divide the CTBs of the CTU into one or more coding blocks. A coding block is an NxN block of samples. In some codecs, to encode a CTU of an image, the video encoder 20 can recursively quad-tree partition the coding tree blocks of the CTU to divide the CTBs into coding blocks, hence the name coding tree units. A coding unit (CU) can comprise one or more coding blocks and syntax structures used to encode samples of the one or more coding blocks. For example, a CU may comprise a coding block of luma samples and two corresponding coding blocks of chroma samples of an image that has a luma sample array, a Cb sample array and a Cr sample array, and syntax structures used to encode the samples of the coding blocks. In monochrome images or images having three separate color planes, a CU can comprise a single coding block and syntax structures used to encode the samples of the coding block.
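The recursive quad-tree splitting described above can be sketched as follows; the shouldSplit callback stands in for the encoder's split decision (or a decoder's parsed split flags), and the block sizes are merely illustrative.

```cpp
#include <cstdio>
#include <functional>

// Recursively quad-tree partition a square block, in the way a CTB is divided
// into coding blocks. `shouldSplit` stands in for the encoder's decision
// process (or the split flags parsed by a decoder).
void quadTreePartition(int x, int y, int size, int minSize,
                       const std::function<bool(int, int, int)>& shouldSplit) {
    if (size > minSize && shouldSplit(x, y, size)) {
        int half = size / 2;
        quadTreePartition(x, y, half, minSize, shouldSplit);               // top-left
        quadTreePartition(x + half, y, half, minSize, shouldSplit);        // top-right
        quadTreePartition(x, y + half, half, minSize, shouldSplit);        // bottom-left
        quadTreePartition(x + half, y + half, half, minSize, shouldSplit); // bottom-right
    } else {
        std::printf("coding block at (%d,%d), size %dx%d\n", x, y, size, size);
    }
}

int main() {
    // Example: a 64x64 CTB in which only the top-left 32x32 quadrant splits further.
    quadTreePartition(0, 0, 64, 8, [](int x, int y, int size) {
        return size == 64 || (x == 0 && y == 0 && size == 32);
    });
}
```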
[0055] In addition, the video encoder 20 can encode the CUs of an image of the video data. In some codecs, as part of encoding a CU, the video encoder 20 can divide a coding block of the CU into one or more prediction blocks. A prediction block is a rectangular (that is, square or non-square) block of samples to which the same prediction is applied. A prediction unit (PU) of a CU may comprise one or more prediction blocks of the CU and syntax structures used to predict the one or more prediction blocks. For example, a PU can comprise a prediction block of luma samples, two corresponding prediction blocks of chroma samples, and syntax structures used to predict the prediction blocks. In monochrome images or images having three separate color planes, a PU can comprise a single prediction block and syntax structures used to predict the prediction block.

[0056] The video encoder 20 can generate a predictive block (for example, a luma, Cb and Cr predictive block) for a prediction block (for example, a luma, Cb and Cr prediction block) of a PU of a CU. The video encoder 20 can use intra prediction or inter prediction to generate a predictive block. If the video encoder 20 uses intra prediction to generate a predictive block, the video encoder 20 can generate the predictive block based on decoded samples of the image that includes the CU. If the video encoder 20 uses inter prediction to generate a predictive block of a PU of a current image, the video encoder 20 can generate the predictive block of the PU based on decoded samples of a reference image (that is, an image other than the current image). In HEVC, the video encoder 20 generates a prediction unit syntax structure within a coding unit syntax structure for inter predicted PUs, but does not generate a prediction unit syntax structure within a coding unit syntax structure for intra predicted PUs. Instead, in HEVC, the syntax elements related to intra predicted PUs are included directly in the coding unit syntax structure.

[0057] The video encoder 20 can generate one or more residual blocks for the CU. For example, the video encoder 20 can generate a luma residual block for the CU. Each sample in the CU's luma residual block indicates a difference between a luma sample in one of the CU's predictive luma blocks and a corresponding sample in the CU's original luma coding block. In addition, the video encoder 20 can generate a Cb residual block for the CU. Each sample in the CU's Cb residual block can indicate a difference between a Cb sample in one of the CU's predictive Cb blocks and a corresponding sample in the CU's original Cb coding block. The video encoder 20 can also generate a Cr residual block for the CU. Each sample in the CU's Cr residual block can indicate a difference between a Cr sample in one of the CU's predictive Cr blocks and a corresponding sample in the CU's original Cr coding block.
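As a small illustration of the residual computation just described (assuming row-major sample buffers of equal size):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Each residual sample is the original sample minus the co-located predictive
// sample; a decoder later adds the reconstructed residual back to the
// predictive samples to reconstruct the block.
std::vector<int16_t> computeResidual(const std::vector<int16_t>& original,
                                     const std::vector<int16_t>& predictive) {
    std::vector<int16_t> residual(original.size());
    for (std::size_t i = 0; i < original.size(); ++i)
        residual[i] = static_cast<int16_t>(original[i] - predictive[i]);
    return residual;
}
```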
[0058] In addition, the video encoder 20 can decompose the residual blocks of a CU into one or more transform blocks. For example, the video encoder 20 can use quad-tree partitioning to decompose the residual blocks of a CU into one or more transform blocks. A transform block is a rectangular (for example, square or non-square) block of samples to which the same transform is applied. A transform unit (TU) of a CU can comprise one or more transform blocks. For example, a TU may comprise a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the samples of the transform blocks. Thus, each TU of a CU can have a luma transform block, a Cb transform block and a Cr transform block. The TU's luma transform block may be a sub-block of the CU's luma residual block. The Cb transform block may be a sub-block of the CU's Cb residual block. The Cr transform block may be a sub-block of the CU's Cr residual block. In monochrome images or images having three separate color planes, a TU can comprise a single transform block and syntax structures used to transform the samples of the transform block.

[0059] The video encoder 20 can apply one or more transforms to a transform block of a TU to generate a coefficient block for the TU. A coefficient block can be a two-dimensional array of transform coefficients. A transform coefficient can be a scalar quantity. In some examples, the one or more transforms convert the transform block from a pixel domain to a frequency domain. Thus, in such examples, a transform coefficient can be a scalar quantity considered to be in a frequency domain. A transform coefficient level is an integer quantity representing a value associated with a particular two-dimensional frequency index in a decoding process prior to scaling to compute a transform coefficient value.

[0060] In some examples, the video encoder 20 skips the application of the transforms to the transform block. In such examples, the video encoder 20 can treat residual sample values in the same way as transform coefficients. Thus, in the examples where the video encoder 20 skips the application of the transforms, the following discussion of transform coefficients and coefficient blocks may be applicable to transform blocks of residual samples.

[0061] After the generation of a coefficient block, the video encoder 20 can quantize the coefficient block to possibly reduce the amount of data used to represent the coefficient block, potentially providing additional compression. Quantization generally refers to a process in which a range of values is compressed to a single value. For example, quantization can be done by dividing a value by a constant and then rounding to the nearest integer. To quantize the coefficient block, the video encoder 20 can quantize the transform coefficients of the coefficient block. In some examples, the video encoder 20 skips quantization.

[0062] The video encoder 20 can generate syntax elements that indicate some or all of the potentially quantized transform coefficients. The video encoder 20 can entropy encode one or more of the syntax elements that indicate a quantized transform coefficient. For example, the video encoder 20 can perform Context-Adaptive Binary Arithmetic Coding (CABAC) on the syntax elements that indicate the quantized transform coefficients. Thus, an encoded block (for example, an encoded CU) can include the entropy-encoded syntax elements that indicate the quantized transform coefficients.
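A minimal sketch of scalar quantization as described above, dividing by a constant step and rounding to the nearest integer, with dequantization as the approximate inverse; the step value is an arbitrary example, not an HEVC quantization step.

```cpp
#include <cmath>
#include <cstdio>

// Quantization compresses a range of coefficient values to a single integer
// level; dequantization multiplies back and recovers only an approximation,
// which is the lossy part of the coding process.
int quantize(double coeff, double step) {
    return static_cast<int>(std::lround(coeff / step));
}

double dequantize(int level, double step) {
    return level * step;
}

int main() {
    double coeff = 37.6, step = 8.0;    // illustrative values
    int level = quantize(coeff, step);  // 5
    std::printf("level=%d reconstructed=%.1f\n", level, dequantize(level, step)); // 40.0
}
```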
[0063] The video encoder 20 can output a bit stream that includes encoded video data. In other words, the video encoder 20 can output a bit stream that includes an encoded representation of video data. The encoded representation of the video data can include an encoded representation of images of the video data. For example, the bit stream may comprise a sequence of bits that form a representation of the encoded images of the video data and associated data. In some examples, a representation of an encoded image may include encoded representations of blocks of the image.

[0064] A bit stream may comprise a sequence of network abstraction layer (NAL) units. A NAL unit is a syntax structure containing an indication of the type of data in the NAL unit and bytes containing that data in the form of a raw byte sequence payload (RBSP) interspersed as necessary with emulation prevention bits. Each of the NAL units can include a NAL unit header and can encapsulate an RBSP. The NAL unit header can include a syntax element that indicates a NAL unit type code. The NAL unit type code specified by the NAL unit header of a NAL unit indicates the type of the NAL unit. An RBSP can be a syntax structure containing an integer number of bytes that is encapsulated within a NAL unit. In some cases, an RBSP includes zero bits.

[0065] The video decoder 30 can receive a bit stream generated by the video encoder 20. As noted above, the bit stream can comprise an encoded representation of video data. The video decoder 30 can decode the bit stream to reconstruct images of the video data. As part of decoding the bit stream, the video decoder 30 can obtain syntax elements from the bit stream. The video decoder 30 can reconstruct images of the video data based at least in part on the syntax elements obtained from the bit stream. The process for reconstructing images of the video data can generally be reciprocal to the process performed by the video encoder 20 to encode the images.

[0066] For example, as part of decoding an image of the video data, the video decoder 30 can use inter prediction or intra prediction to generate predictive blocks. In addition, the video decoder 30 can determine transform coefficients based on syntax elements obtained from the bit stream. In some examples, the video decoder 30 inverse quantizes the determined transform coefficients. In addition, the video decoder 30 can apply an inverse transform to the determined transform coefficients to determine residual sample values. The video decoder 30 can reconstruct a block of the image based on the residual samples and corresponding samples of the generated predictive blocks. For example, the video decoder 30 can add residual samples to corresponding samples of the generated predictive blocks to determine reconstructed samples of the block.

[0067] More specifically, in HEVC and other video coding specifications, the video decoder 30 can use inter prediction or intra prediction to generate one or more predictive blocks for each PU of a current CU. In addition, the video decoder 30 can inverse quantize coefficient blocks of the TUs of the current CU.
The video decoder 30 can perform inverse transforms on the coefficient blocks to reconstruct transform blocks of the TUs of the current CU. The video decoder 30 can reconstruct a coding block of the current CU based on samples of the predictive blocks of the PUs of the current CU and residual samples of the transform blocks of the TUs of the current CU. In some examples, the video decoder 30 can reconstruct the coding blocks of the current CU by adding the samples of the predictive blocks for the PUs of the current CU to the corresponding decoded samples of the transform blocks of the TUs of the current CU. By reconstructing the coding blocks for each CU of an image, the video decoder 30 can reconstruct the image.

[0068] As mentioned above, a video coder (for example, the video encoder 20 or the video decoder 30) can apply inter prediction to generate a predictive block for a video block of a current image. For example, the video coder can apply inter prediction to generate a predictive block of a CU. If the video coder applies inter prediction to generate a predictive block, the video coder generates the predictive block based on decoded samples of one or more reference images. Typically, reference images are images that are different from the current image. In some video coding specifications, a video coder can also treat the current image itself as a reference image.

[0069] When a video coder (for example, the video encoder 20 or the video decoder 30) begins processing a current image, the video coder can determine one or more reference picture set (RPS) subsets for the current image. For example, in HEVC, a video coder can determine the following subsets of the RPS: RefPicSetStCurrBefore, RefPicSetStCurrAfter, RefPicSetStFoll, RefPicSetLtCurr, and RefPicSetLtFoll. In addition, the video coder can determine one or more reference image lists. Each of the reference image lists for a current image includes zero or more reference images from the RPS of the current image. One of the reference image lists can be referred to as reference image list 0 (RefPicList0) and another reference image list can be referred to as reference image list 1 (RefPicList1).

[0070] A slice of an image can include an integer number of blocks of the image. For example, in HEVC and other video coding specifications, a slice of an image can include an integer number of CTUs of the image. The CTUs of a slice can be ordered consecutively in a scan order, such as a raster scan order. In HEVC and other video coding standards, a slice is defined as an integer number of CTUs contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. In addition, in HEVC and other video coding standards, a slice segment is defined as an integer number of CTUs ordered consecutively in the tile scan and contained in a single NAL unit. A tile scan is a specific sequential ordering of the CTBs partitioning an image, in which the CTBs are ordered consecutively in a CTB raster scan within a tile, while the tiles of an image are ordered consecutively in a raster scan of the tiles of the image. A tile is a rectangular region of CTBs within a particular tile column and a particular tile row in an image.
[0071] As noted above, a bit stream may include a representation of encoded images of the video data and associated data. The associated data can include parameter sets. NAL units can encapsulate RBSPs for video parameter sets (VPSs), sequence parameter sets (SPSs) and image parameter sets (PPSs). A VPS is a syntax structure comprising syntax elements that apply to zero or more entire coded video sequences (CVSs). An SPS is also a syntax structure comprising syntax elements that apply to zero or more entire CVSs. An SPS can include a syntax element that identifies a VPS that is active when the SPS is active. Thus, the syntax elements of a VPS can be more generally applicable than the syntax elements of an SPS. A PPS is a syntax structure comprising syntax elements that apply to zero or more encoded images. A PPS can include a syntax element that identifies an SPS that is active when the PPS is active. A slice header of a slice segment can include a syntax element that indicates a PPS that is active when the slice segment is being encoded.

[0072] As discussed above, a video encoder can generate a bit stream that comprises a series of NAL units. In multi-layer video coding, different NAL units of the bit stream can be associated with different layers of the bit stream. A layer can be defined as a set of VCL NAL units and associated non-VCL NAL units that have the same layer identifier. A layer can be equivalent to a view in multi-view video coding. In multi-view video coding, a layer can contain all the view components of the same layer with different time instances. Each view component can be an encoded image of the video scene belonging to a specific view at a specific time instance. In multi-layer video coding, the term access unit can refer to a set of images that correspond to the same time instance. Thus, a view component can be an encoded representation of a view in a single access unit. In some examples, a view component may comprise a texture view component (that is, a texture image) or a depth view component (that is, a depth image).

[0073] In some examples of multi-view video coding, a layer may contain either all the encoded depth images of a specific view or the encoded texture images of a specific view. In other examples of multi-view video coding, a layer may contain both texture view components and depth view components of a specific view. Similarly, in the context of scalable video coding, a layer typically corresponds to encoded images having video characteristics different from the encoded images in other layers. Such video characteristics typically include spatial resolution and quality level (for example, signal-to-noise ratio).

[0074] For each respective layer of the bit stream, the data in a lower layer can be decoded without reference to data in any higher layer. In scalable video coding, for example, the data in a base layer can be decoded without reference to data in an enhancement layer. In general, NAL units can encapsulate data from only a single layer. Thus, NAL units encapsulating data of the highest remaining layer of the bit stream can be removed from the bit stream without affecting the ability to decode the data in the remaining layers of the bit stream. In multi-view coding, higher layers may include additional view components.
In SHVC, higher layers can include signal-to-noise ratio (SNR) enhancement data, spatial enhancement data, and/or temporal enhancement data. In MV-HEVC and SHVC, a layer can be referred to as a base layer if a video decoder can decode images in the layer without reference to data from any other layer. The base layer can conform to the HEVC base specification (for example, Rec. ITU-T H.265 / ISO/IEC 23008-2).

[0075] In scalable video coding, layers other than the base layer can be referred to as enhancement layers and can provide information that increases the visual quality of video data decoded from the bit stream. Scalable video coding can increase spatial resolution, signal-to-noise ratio (that is, quality) or temporal rate.

[0076] Multi-view coding can support inter-view prediction. Inter-view prediction is similar to the inter prediction used in HEVC and can use the same syntax elements. However, when a video coder performs inter-view prediction on a current video unit (such as a PU), the video encoder 20 can use, as a reference image, an image that is in the same access unit as the current video unit, but in a different view. In contrast, conventional inter prediction uses images in different access units as reference images.

[0077] In multi-view coding, a view can be referred to as a base view if a video decoder (for example, the video decoder 30) can decode images in the view without reference to images in any other view. When encoding an image in one of the non-base views, a video coder (such as the video encoder 20 or the video decoder 30) can add an image to a reference image list if the image is in a different view, but within the same time instance (that is, access unit) as the image that the video coder is currently encoding. Similarly to other inter prediction reference images, the video coder can insert an inter-view prediction reference image at any position of a reference image list.

[0078] For example, NAL units may include headers (that is, NAL unit headers) and payloads (for example, RBSPs). NAL unit headers can include layer identifier syntax elements (for example, nuh_layer_id syntax elements in HEVC). NAL units that have layer identifier syntax elements that specify different values belong to different layers of a bit stream. Thus, in multi-layer coding (for example, MV-HEVC, SVC or SHVC), the layer identifier syntax element of a NAL unit specifies a layer identifier (that is, a layer ID) of the NAL unit. The layer identifier of a NAL unit is equal to 0 if the NAL unit relates to a base layer in multi-layer coding. Data in a base layer of a bit stream can be decoded without reference to data in any other layer of the bit stream. If the NAL unit does not relate to a base layer in multi-layer coding, the layer identifier of the NAL unit may have a non-zero value. In multi-view coding, different layers of a bit stream can correspond to different views. In scalable video coding (for example, SVC or SHVC), layers other than the base layer can be referred to as enhancement layers and can provide information that increases the visual quality of the video data decoded from the bit stream.

[0079] In addition, some images within a layer can be decoded without reference to other images within the same layer.
Thus, NAL units encapsulating data of certain images of a layer can be removed from the bit stream without affecting the decodability of other images in the layer. Removing the NAL units encapsulating the data of such images can reduce the frame rate of the bit stream. A subset of images within a layer that can be decoded without reference to other images within the layer may be referred to herein as a sub-layer, temporal layer or temporal sub-layer. The highest temporal layer can include all images in the layer. Thus, temporal scalability can be achieved within a layer by defining a group of images with a particular temporal level as a sub-layer (that is, a temporal layer).

[0080] NAL units may include temporal identifier syntax elements (for example, temporal_id in HEVC). The temporal identifier syntax element of a NAL unit specifies a temporal identifier of the NAL unit. The temporal identifier of a NAL unit identifies the temporal sub-layer with which the NAL unit is associated. Thus, each temporal sub-layer of a bit stream can be associated with a different temporal identifier. If the temporal identifier of a first NAL unit is less than the temporal identifier of a second NAL unit, the data encapsulated by the first NAL unit can be decoded without reference to the data encapsulated by the second NAL unit. A sketch of this extraction property is given below.
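To make the extraction property concrete, the following is a minimal sketch of temporal sub-layer extraction; it is not taken from the reference software of any standard, and the NalUnit fields shown are assumptions chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class NalUnit:
    nuh_layer_id: int    # layer identifier
    temporal_id: int     # temporal sub-layer identifier
    payload: bytes

def extract_sub_bitstream(nal_units, target_temporal_id):
    # NAL units whose temporal identifier exceeds the target can be
    # dropped; by the rule above, the remaining NAL units never reference
    # the dropped ones, so they stay decodable at a reduced frame rate.
    return [nal for nal in nal_units if nal.temporal_id <= target_temporal_id]
```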
[0081] Video coding standards include ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) and Multi-view Video Coding (MVC) extensions. In addition, a new video coding standard, namely High Efficiency Video Coding (HEVC), was recently developed by the Joint Collaborative Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Wang et al., High Efficiency Video Coding (HEVC) Defect Report, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Vienna, AT, 25 July to 2 August 2013, document JCTVC-N1003-v1, is an HEVC draft specification. The HEVC standard was finalized in January 2013.

[0082] ITU-T VCEG (Q6/16) and ISO/IEC MPEG (JTC 1/SC 29/WG 11) are now studying the potential need for standardization of future video coding technology with a compression capability that significantly exceeds that of the current HEVC standard (including its current extensions and near-term extensions for screen content coding and high dynamic range coding). The groups are working together on this exploration activity in a joint collaboration effort known as the Joint Video Exploration Team (JVET) to evaluate compression technology designs proposed by their experts in this area. The first JVET meeting was held during 19-21 October 2015. The Joint Exploration Model (JEM) is a test model produced by JVET. J. Chen et al., Description of Exploration Experiments in Coding Tools, JVET-D1011, Chengdu, October 2016, is an algorithm description for the fourth version of JEM (that is, JEM4).

[0083] In the field of video coding, it is common to apply filtering in order to enhance the quality of a decoded video signal. The filter can be applied as a post-filter, where the filtered frame is not used for prediction of future frames, or as an in-loop filter, where the filtered frame is used to predict future frames. A filter can be designed, for example, by minimizing the error between the original signal and the decoded filtered signal. Similarly to the transform coefficients, the filter coefficients h(k,l), k = -K, ..., K, l = -K, ..., K, can be quantized as follows:

f(k,l) = round(normFactor · h(k,l))    (1)

and then coded and sent to a decoder. The normFactor is usually equal to 2^n. The higher the value of normFactor, the more accurate the quantization, and the quantized filter coefficients f(k,l) provide better performance. On the other hand, higher values of normFactor produce coefficients f(k,l) that require more bits for transmission.

[0084] In video decoder 30, the decoded filter coefficients f(k,l) are applied to the reconstructed image R(i,j) as follows:

R'(i,j) = ( Σ_{k=-K..K} Σ_{l=-K..K} f(k,l) · R(i+k, j+l) ) / ( Σ_{k=-K..K} Σ_{l=-K..K} f(k,l) )

where i and j are the coordinates of the pixels within the frame. The adaptive loop filter was evaluated during the development of HEVC, but was not included in the final version.

[0085] The in-loop adaptive loop filter employed in JEM was described in J. Chen et al., Coding tools investigation for next generation video coding, SG16-Geneva-COO, January 2015. The basic idea is the same as the ALF with block-based adaptation in T. Wiegand et al., WD3: Working Draft 3 of High-Efficiency Video Coding, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, JCTVC-E603, 5th Meeting: Geneva, CH, 16-23 March 2011, hereinafter JCTVC-E603.

[0086] For the luma component, the 4x4 blocks of the whole image are classified based on 1-dimensional Laplacian direction (up to 3 directions) and 2-dimensional Laplacian activity (up to 5 activity values). The calculation of the direction Dir_b and the unquantized activity Act_b is shown in equations (2) to (5), where R(i,j) indicates a reconstructed pixel with coordinate (i,j) relative to the top-left of a 4x4 block:

V(i,j) = | 2·R(i,j) − R(i,j−1) − R(i,j+1) |    (2)
H(i,j) = | 2·R(i,j) − R(i−1,j) − R(i+1,j) |    (3)
Dir_b = 1, if Σ H(i,j) > 2·Σ V(i,j); Dir_b = 2, if Σ V(i,j) > 2·Σ H(i,j); Dir_b = 0, otherwise    (4)
Act_b = Σ_{i=0..3} Σ_{j=0..3} ( V(i,j) + H(i,j) )    (5)

where the sums in equation (4) are likewise taken over i = 0..3 and j = 0..3. Act_b is further quantized to the range 0 to 4, inclusive, as described in JCTVC-E603.

[0087] In total, each block can be categorized into one of 15 (5x3) groups, and an index is assigned to each 4x4 block according to the values of Dir_b and Act_b of the block. Denote the group index by C and set C equal to 5·Dir_b + Â, where Â is the quantized value of Act_b. Therefore, video encoder 20 can signal up to 15 sets of ALF parameters for the luma component of an image. To save signaling cost, video encoder 20 can merge groups along the group index value. For each merged group, video encoder 20 can signal one set of ALF coefficients. Figure 2 illustrates three different example ALF filter supports. In the example of Figure 2, up to three symmetric circular filter shapes are supported. For both chroma components of an image, a single set of ALF coefficients is applied, and the 5x5 diamond-shaped filter is always used.

[0088] At the decoder side, video decoder 30 may filter each decoded pixel sample R(i,j), resulting in a filtered pixel value R'(i,j) as shown in equation (6):

R'(i,j) = Σ_{m=-L..L} Σ_{n=-L..L} f(m,n) · R(i+m, j+n) + o    (6)

where L denotes the filter length, f(m,n) represents a filter coefficient and o indicates a filter offset. In some designs, only up to one filter is supported for the two chroma components. A sketch of the block classification and of this filtering operation is given below.
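As an illustration of equations (2) to (6), the following is a minimal sketch of the 4x4 block classification and the sample filtering; the activity quantizer shown (a simple shift) is a placeholder for the mapping described in JCTVC-E603, and the normalization of the filtered sum is omitted:

```python
import numpy as np

def classify_4x4_block(R, y0, x0):
    """Compute the class index C = 5*Dir_b + A_hat for the 4x4 block
    whose top-left sample is R[y0, x0] (equations (2) to (5))."""
    sum_v = sum_h = 0
    for i in range(y0, y0 + 4):
        for j in range(x0, x0 + 4):
            sum_v += abs(2 * int(R[i, j]) - int(R[i, j - 1]) - int(R[i, j + 1]))
            sum_h += abs(2 * int(R[i, j]) - int(R[i - 1, j]) - int(R[i + 1, j]))
    if sum_h > 2 * sum_v:
        dir_b = 1
    elif sum_v > 2 * sum_h:
        dir_b = 2
    else:
        dir_b = 0
    act_b = sum_v + sum_h
    a_hat = min(4, act_b >> 6)   # placeholder quantizer to the range 0..4
    return 5 * dir_b + a_hat

def filter_sample(R, i, j, f, offset, L):
    """Equation (6): apply a (2L+1)x(2L+1) filter f around sample (i, j)."""
    total = 0
    for m in range(-L, L + 1):
        for n in range(-L, L + 1):
            total += f[m + L, n + L] * R[i + m, j + n]
    return total + offset
```

Both functions assume numpy arrays; a practical implementation would also handle picture-boundary padding, which is omitted here.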
[0089] The following is a list of data that can be signaled for the filter coefficients:

1. Total number of filters: the total number of filters (or the total number of merged groups) is first signaled when ALF is enabled for a slice. The signaled total number of filters applies to the luma component. For the chroma components, since only one ALF filter can be applied, there is no need to signal the total number of filters.

2. Filter support: an index of the three filter supports is signaled.

3. Filter index: indicates which ALF filter is used, that is, the class merging information. Classes that have non-consecutive values of C can be merged, that is, they can share the same filter. By coding a flag for each class to indicate whether or not the class is merged, the filter index can be derived. In some instances, the class merging information can also be signaled as merging from a left or an above filter index.

4. forceCoeff0 flag: the forceCoeff0 flag is used to indicate whether at least one of the filters is not to be coded. When this flag is equal to 0, all filters must be coded. When the forceCoeff0 flag is equal to 1, a flag for each merged group, denoted CodedVarBin, is additionally signaled to indicate whether the filter is signaled or not. When the filter is not signaled, all filter coefficients associated with the filter are equal to 0.

5. Prediction method: when multiple groups of filters need to be signaled, one of two methods can be used:

• All filters are coded directly into the filter information. In this case, for example, the values of the filter coefficients can be coded in the bit stream without using any predictive coding techniques. In other words, the filters are explicitly signaled.

• The filter coefficients of a first filter are coded directly, while the filter coefficients of the remaining filters are coded predictively into the filter information. In this case, the values of the filter coefficients can be defined by residual values or differences relative to the filter coefficients associated with a previously coded filter. The previously coded filter is the most recently coded filter (that is, the filter indices of the current filter and of its predictor are consecutive). To indicate the use of one of the above two prediction methods, video encoder 20 can signal a flag when the number of merged groups is greater than 1 and forceCoeff0 is equal to 0 (a sketch of the predictive method is given below).

[0090] A set of ALF parameters can include one or more of the syntax elements listed above and can also include filter coefficients.

[0091] A video coder (for example, video encoder 20 or video decoder 30) can also use temporal prediction of filter coefficients. The video coder can store the ALF coefficients of previously coded images and can reuse the ALF coefficients of previously coded images as the ALF coefficients of a current image. Video encoder 20 can choose to use the stored ALF coefficients for the current image and bypass the signaling of the ALF coefficients. In this case, video encoder 20 signals only an index to one of the reference images (which is effectively equal to the index of the candidate in the array of stored ALF parameters), and the stored ALF coefficients of the indicated image are simply inherited for the current image. To indicate the use of temporal prediction, video encoder 20 can first code a flag that indicates the use of temporal prediction, before sending the index to the reference image.
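Returning to item 5 of the list above, the following is a minimal sketch of the predictive coding method, in which the first filter is coded directly and each remaining filter is coded as a difference from the immediately preceding filter; the entropy coding of the values is omitted:

```python
import numpy as np

def predictively_code_filters(filters):
    # filters: list of coefficient arrays, in filter-index order.
    coded = [filters[0].copy()]                    # first filter coded directly
    for k in range(1, len(filters)):
        coded.append(filters[k] - filters[k - 1])  # residual vs. previous filter
    return coded

def reconstruct_filters(coded):
    filters = [coded[0].copy()]
    for k in range(1, len(coded)):
        filters.append(filters[k - 1] + coded[k])  # add residual back
    return filters
```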
[0092] In JEM4, video coders store the ALF parameters of a maximum of six previously coded images that are coded with signaled ALF parameters (that is, with temporal prediction disabled) in a separate array. A video coder effectively empties the array for intra random access point (IRAP) images. To avoid duplicates, the video coder only stores ALF parameter values in the array if the ALF parameter values are explicitly signaled. The storage of ALF parameters operates in a first-in, first-out (FIFO) mode, so that if the array is full, the video coder overwrites the oldest set of ALF parameter values (that is, ALF parameters) with a new set of ALF parameter values, in decoding order.

[0093] In M. Karczewicz et al., Improvements on adaptive loop filter, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, Doc. JVET-B0060_r1, 2nd Meeting: San Diego, USA, 20-26 February 2016 (hereinafter JVET-B0060), the Geometric transformations-based ALF (GALF) is proposed. In GALF, the classification is modified with diagonal gradients taken into account, and geometric transformations can be applied to the filter coefficients.

[0094] Based on all the gradient information, including horizontal, vertical and diagonal gradients, one of four geometric transformations of the filter coefficients is determined. That is, samples classified into the same category will share the same filter coefficients, but the filter support region can be transformed based on the selected geometric transformation index. The method described in JVET-B0060 can effectively reduce the number of filters that must be sent to the decoder, thereby reducing the number of bits required to represent them, or, alternatively, reduce the differences between reconstructed frames and original frames. Each 2x2 block is categorized into one of 25 classes based on its directionality and a quantized activity value.

[0095] In addition, in JVET-B0060, to improve coding efficiency when temporal prediction is not available (for example, for intra frames), a video coder assigns a set of 16 fixed filters to each class. That is, 16*25 filters (classes) can be pre-defined. To indicate the use of a fixed filter, a flag for each class is signaled and, if required, the index of the fixed filter. Even when a fixed filter is selected for a given class, the adaptive filter coefficients f(k,l) can still be sent for this class, in which case the filter coefficients that will be applied to the reconstructed image are the sum of both sets of coefficients. One or more of the classes can share the same coefficients f(k,l) signaled in the bit stream, even if different fixed filters were chosen for them. US Patent Publication No. 2017/0238020, published on 17 August 2017, describes how the fixed filters could also be applied to inter-coded frames.

[0096] In JVET-B0060, the design of temporal prediction from previously coded frames as in the second version of JEM (that is, JEM2) is kept unchanged. JEM2 is described in Jianle Chen et al., Algorithm Description of Joint Exploration Test Model 2, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 2nd Meeting, San Diego, USA, 20-26 February 2016, document JVET-B1001_v3. That is, a flag is coded to indicate whether temporal prediction of the ALF coefficients is used. If temporal prediction of the ALF coefficients is used, an index identifying the selected image's stored ALF parameters is also signaled.
In this case, there is no need to signal the filter indices for each class or the filter coefficients.

[0097] In addition, explicit coding of ALF filter coefficients can be used with GALF. For example, a prediction pattern and a prediction index from fixed filters can be explicitly coded in GALF. Three cases are defined: case 1, none of the filters of the 25 classes is predicted from the fixed filters; case 2, all filters of the classes are predicted from the fixed filters; and case 3, filters associated with some classes are predicted from the fixed filters and filters associated with the remaining classes are not predicted from the fixed filters. An index can first be coded to indicate one of the three cases. In addition, the following applies:

• If the indicated case is case 1, there is no need to signal the index of the fixed filter.

• Otherwise, if the indicated case is case 2, an index of the fixed filter selected for each class is signaled.

• Otherwise, if the indicated case is case 3, a flag for each class is first signaled and, if a fixed filter is used, the index of the fixed filter is also signaled.

[0098] In GALF, to reduce the number of bits required to represent the filter coefficients, different classes can be merged. However, unlike in JCTVC-E603, any set of classes can be merged, even classes having non-consecutive values of C. The information about which classes are merged is provided by sending, for each of the 25 classes, an index i_C. Classes having the same index i_C share the same coded filter coefficients. The index i_C is coded with a truncated fixed-length method. Similarly, the forceCoeff0 flag can also be used. When the forceCoeff0 flag is equal to 1, a one-bit flag, denoted CodedVarBin, is additionally signaled for each of the merged groups (all filters to be coded) to indicate whether the signaled filter coefficients are all zero. In addition, when forceCoeff0 is equal to 1, predictive coding (that is, coding the difference between the current filter and the previously coded filter) is disabled. When prediction from fixed filters is allowed, the filters to be signaled/coded mentioned above are the differences between the filter applied to the reconstructed image and the selected fixed filter. Other information, such as the coefficients, is coded in the same way as in JEM2.

[00100] Because GALF is a form of ALF, this description may use the term ALF to refer to both ALF and GALF.

[00101] The current designs for temporal prediction of filters in ALF and GALF exhibit various deficiencies. For example, if an image uses explicit coding of filters, after decoding the image, the corresponding ALF filters may be added to an array of ALF filters for temporal prediction, regardless of temporal layers. That is, after decoding the image, a video coder may include a set of ALF parameters in an entry of the array. The set of ALF parameters can include filter coefficients and group merging information for each of the ALF filters used in the image. This design leads to failures when decoding a subset of temporal layers under certain configurations, such as random access. An example is given in Figure 3, where the GOP size is equal to 16. In the example of Figure 3, five temporal layers are supported (denoted T0 to T4).
The image encoding/decoding order is: picture order count (POC) 0 [T0], POC16 [T0], POC8 [T1], POC4 [T2], POC2 [T3], POC1 [T4], POC3 [T4], POC6 [T3], POC5 [T4], POC7 [T4], POC12 [T2], POC10 [T3], POC9 [T4], POC11 [T4], POC14 [T3], POC13 [T4], POC15 [T4]. Arrows with different stroke patterns point to images that can use the pointed-to images as reference images. Note that Figure 3 omits certain arrows for clarity.

[00102] Figure 4A illustrates an array 50 for storing filter parameters. Figure 4B illustrates a different state of array 50. Assuming that each image is coded with ALF enabled and the ALF filters of each image are explicitly signaled, before the decoding of POC3 of Figure 3, the array of stored filters has the state shown in Figure 4A. After the decoding of POC3 and before the decoding of POC6 of Figure 3, the array of stored ALF filters is updated as shown in Figure 4B. As shown in the example of Figure 4B, the filters for POC0 were replaced by the filters for POC3 because the filters are replaced in a FIFO mode and the filters for POC0 were the first filters added to array 50.

[00103] Therefore, for the decoding of POC6, with a temporal layer index (TempIdx) equal to 3, the filters of POC1 and POC3, with a temporal layer index equal to 4, are required to have been decoded. This conflicts with the spirit of temporal scalability, in which the decoding of an image with a certain TempIdx value should not depend on images with a higher TempIdx value. A sketch of this failure case is given below.
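The failure case can be reproduced with a minimal sketch of the JEM-style FIFO buffer (the six-entry limit is that of paragraph [0092]); the (POC, TempIdx) tuples stored here are an assumption for illustration:

```python
from collections import deque

stored = deque(maxlen=6)  # FIFO: appending to a full deque drops the oldest

# Decoding order of Figure 3 up to (but not including) POC6, with each
# explicitly signaled set of ALF parameters pushed after decoding:
for poc, temp_idx in [(0, 0), (16, 0), (8, 1), (4, 2), (2, 3), (1, 4), (3, 4)]:
    stored.append((poc, temp_idx))

print(list(stored))
# [(16, 0), (8, 1), (4, 2), (2, 3), (1, 4), (3, 4)]
# POC0 has been evicted, and the candidate list now contains entries from
# POC1 and POC3 in layer T4, images that a decoder targeting sub-layer T3
# never decodes; temporal prediction for POC6 can thus reference missing data.
```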
[00104] A second shortcoming of current designs for temporal prediction of filters in ALF is that, when temporal prediction of ALF filters is enabled for a slice, in some examples all ALF filters of a given previously coded frame must be inherited. This means that the class merging and the filter coefficients are directly reused, without the possibility of slightly modifying the classes and filter coefficients to better capture the characteristics of a current slice.

[00105] The following techniques are proposed to resolve one or more of the above-mentioned shortcomings of the designs for temporal prediction of filters in ALF. The techniques itemized below can be applied individually. Alternatively, any combination of them can be applied.

[00106] According to a first technique, multiple arrays can be allocated to store one or more sets of previously coded ALF filters. In other words, a video coder can store sets of ALF parameters in a plurality of arrays. Each array corresponds to an assigned temporal layer index (TempIdx, which is equivalent to TemporalId as defined in the HEVC specification). According to the first technique, each array contains only ALF parameters of images with the same or a lower TempIdx. A slice (or other unit for performing ALF) with a given TempIdx can select a set of filters comprised in the corresponding array. In other words, a video coder can apply, to samples of a block of the slice, an ALF filter based on ALF parameters in the array corresponding to the TempIdx of the slice. For a region that is coded with ALF enabled, and assuming that the ALF parameters are explicitly signaled (that is, coded without temporal prediction), the set of ALF parameters for that region can be added to the arrays associated with the same or a higher TempIdx. This can resolve the deficiency described above, in which the array of stored ALF parameters includes one or more sets of ALF parameters corresponding to ALF filters used in images of temporal layers higher than the temporal layer of the current image.

[00107] Figure 5 illustrates a plurality of arrays 60A-60E (collectively, arrays 60) corresponding to different temporal layers, according to a technique of this description. In the example of Figure 5, assuming that each image of Figure 3 is coded with ALF enabled and the ALF filters of each image are explicitly signaled, before the decoding of POC6 of Figure 3, the arrays of stored ALF filters have the states shown in Figure 5.

[00108] In the example of Figure 5, since POC6 is in temporal layer T3, a video coder can use ALF filters of array 60D. Thus, unlike the example of Figure 4B, whether or not POC1 is decoded has no impact on which ALF filters are available for use when decoding POC6.

[00109] In this way, according to the first technique, video encoder 20 can generate a bit stream that includes an encoded representation of a current image of the video data. A current region (for example, a slice or other type of unit for performing ALF) of the current image is associated with a temporal index (that is, a temporal layer index) indicating a temporal layer to which the current region belongs. In addition, video encoder 20 reconstructs all or part of the current image. Video encoder 20 stores, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data decoded before the current region. For example, for each respective array of a plurality of arrays that correspond to different temporal layers, video encoder 20 can store, in the respective array, sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or in a temporal layer lower than the temporal layer corresponding to the respective array. Each respective array of the plurality of arrays corresponds to a respective different temporal layer. In addition, video encoder 20 determines, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. In some examples, video encoder 20 can determine the selected set of ALF parameters based on a rate-distortion analysis of the sets of ALF parameters in the arrays. Video encoder 20 can signal an index of the selected set of ALF parameters in the bit stream. In addition, in this example, video encoder 20 applies, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region. Applying adaptive loop filtering to the current region may comprise applying an ALF filter to one or more, but not necessarily all, blocks within the current region. After applying adaptive loop filtering to the current region, video encoder 20 can use the current region to predict a subsequent image of the video data. A sketch of this per-layer storage follows.
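The following is a minimal sketch of the first technique, under the assumptions of five temporal layers and a fixed capacity per array; the eviction rule shown is FIFO (the third technique described below replaces it with a POC-based rule):

```python
NUM_LAYERS = 5

# arrays[k] holds only ALF parameter sets of regions with TempIdx <= k.
arrays = [[] for _ in range(NUM_LAYERS)]

def store_explicit_alf_params(temp_idx, alf_params, capacity=6):
    # An explicitly signaled set is added to the array of its own layer
    # and to every array of a higher layer.
    for k in range(temp_idx, NUM_LAYERS):
        if len(arrays[k]) == capacity:
            arrays[k].pop(0)            # FIFO eviction for this sketch
        arrays[k].append(alf_params)

def candidates(temp_idx):
    # A region with a given TempIdx selects only from its own layer's
    # array, so no candidate comes from a higher temporal layer.
    return arrays[temp_idx]
```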
Similarly, according to the first technique, video decoder 30 can receive a bit stream that includes an encoded representation of a current image of the video data. A current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. Video decoder 30 can then reconstruct all or part of the current image. Additionally, video decoder 30 stores, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data decoded before the current image. Each respective array of the plurality of arrays corresponds to a respective different temporal layer. For example, for each respective array of a plurality of arrays that correspond to different temporal layers, video decoder 30 can store, in the respective array, sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data that are decoded before the current region and that are in the temporal layer corresponding to the array or in a temporal layer lower than the corresponding temporal layer. Video decoder 30 determines, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, or in one of the arrays of the plurality of arrays corresponding to a temporal layer lower than the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. In some examples, video decoder 30 determines the selected set of ALF parameters based on an index signaled in the bit stream. Video decoder 30 can then apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region. Applying the ALF filter to the current region may include applying the ALF filter to one or more, but not necessarily all, blocks within the current region.

[00111] Each array can comprise sets of previously decoded filters associated with images of an equal or lower TempIdx. For example, the k-th array is assigned to be associated with TempIdx equal to k, and can contain only complete sets or subsets of filters (for example, ALF parameters of filters) of images with TempIdx equal to or less than k.

[00112] Thus, for each respective array of the plurality of arrays, a video coder (for example, video encoder 20 or video decoder 30) can store, in the respective array, sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data decoded before the current region that belong to the temporal layer corresponding to the respective array and that belong to temporal layers lower than the temporal layer corresponding to the respective array.

[00113] In some examples, the numbers of filter sets associated with different arrays may be different (and could be greater than or equal to 0). Alternatively, in some examples, the numbers of filter sets associated with different temporal layers may be different and may depend on the temporal layer index. Thus, in some examples, at least two of the plurality of arrays include different numbers of sets of ALF parameters. For example, in the example of Figure 5, five locations may be unnecessary in array 60A because, in a GOP of 16 images, there will never be more than two images in temporal layer T0. Thus, array 60A may have only two locations.
Similarly, in the example of Figure 5, in a GOP of 16 images, there will be at most one image in temporal layer T1. Thus, array 60B may have only three locations.

[00114] In some examples, after coding a certain slice/unit for performing ALF, a video coder can use the set of filters associated with the slice to update the arrays associated with an equal or higher TempIdx. For example, a video coder can store, in the array corresponding to the temporal layer to which the current region belongs (and, in some cases, in the arrays corresponding to temporal layers higher than the temporal layer to which the current region belongs), a set of ALF parameters applicable to a current region (that is, a slice or other unit for performing ALF). For example, in the example of Figure 3 and Figure 5, if the current region is in an image associated with POC8, the video coder can update arrays 60B, 60C, 60D and 60E to include the set of ALF parameters applicable to the current region.

[00115] In some examples, the POC value associated with each set of filters (for example, a set of ALF parameters) can also be recorded. Thus, a video coder can store, in the array corresponding to the temporal layer to which a current region of a current image belongs, the POC value of the current image. In one example, when selecting a filter as a candidate from a given array for ALF temporal prediction, it may be required that the POC value associated with the filter be equal to a POC value of one of the reference images in the current reference image lists. For example, in addition to storing the ALF parameters of the ALF filters used by the POC0 image of Figure 5, a video coder can store data in array 60A indicating a POC value of 0. In this example, if the image at POC0 is not in a reference image list of the POC6 image, then, when coding a region of the POC6 image, video encoder 20 is not allowed to select an ALF filter from among the ALF filters stored in array 60A for the POC0 image.

[00116] According to a second technique, a single array is still used to store sets of previously coded ALF filters. In addition to the filters, for each set (which may contain multiple filters used to code a slice/image), the temporal layer index (TempIdx) associated with the set of filters is also recorded. In other words, the temporal layer indices can be stored together with the ALF parameters of the ALF filters.

[00117] In some examples based on the second technique, the size of the array can be set to (number of possible temporal layers) * (maximum number of filter sets for temporal prediction for a slice/image or other unit for using ALF). In one example, the number of possible temporal layers can depend on a coding structure (for example, how many levels are supported in the hierarchical B structure) or on a low-delay check flag (NoBackwardPredFlag in the HEVC specification).

[00118] In one example, the maximum number of filter sets for temporal prediction for a slice/image, or other unit for using ALF, can be pre-defined or signaled, or can depend on TempIdx. In one example, the number of possible temporal layers is set to 5 and the maximum number of filter sets for temporal prediction for a slice/image, or other unit for using ALF, is set to 6.
When coding a slice/image, the possible candidates for temporal prediction can be decided by traversing the sets included in the array, and all or some sets of filters with an equal or lower TempIdx are treated as effective candidates.

[00119] In the process of coding a particular slice/unit for performing ALF, the set of filters associated with the slice and the associated TempIdx can be used to update the array. For example, a video coder (for example, video encoder 20 or video decoder 30) can determine, based on a selected set of ALF parameters in the array, an applicable set of ALF parameters for a region. In this example, the encoder or decoder can store, in the array, the applicable set of ALF parameters. The encoder or decoder can also store the applicable set of ALF parameters in one or more of the arrays corresponding to temporal layers higher than the temporal layer to which the current region belongs. In this example, the video coder may not store ALF parameters in the array if the ALF parameters were not explicitly signaled in the bit stream. In some examples, the encoder or decoder stores the applicable set of ALF parameters in the array only if the applicable set of ALF parameters has not yet been stored in the array.

[00120] Figure 6 illustrates an array 70 for storing ALF parameters and temporal layer index (TempIdx) values, according to the second technique of this description. In the example of Figure 6, the number of possible temporal layers is 5 and the maximum number of filter sets for temporal prediction for a region is set to 6, resulting in array 70 containing 30 entries. In the example of Figure 6, assuming that each image of Figure 3 is coded with ALF enabled and the ALF filters of each image are explicitly signaled, before the decoding of POC6 of Figure 3, the array of stored ALF filters has the state shown in Figure 6.

[00121] In the example of Figure 6, a video coder can review the TempIdx values stored in array 70 to determine which entries of array 70 store ALF parameters that the video coder can use as predictors of the ALF parameters used in coding POC6. In doing so, the video coder can ignore any entries specifying TempIdx values greater than T3 (that is, the TempIdx of POC6). In contrast to the example of Figure 4B, the filters for POC0 are not overwritten by the filters for POC3. A sketch of this tagged single-array design is given below.
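A minimal sketch of the second technique follows, assuming 5 temporal layers and 6 sets per slice/image (so at most 30 entries, as in Figure 6); the entry layout is an assumption for illustration:

```python
MAX_ENTRIES = 5 * 6   # (possible temporal layers) * (max sets per slice/image)

stored = []           # entries: (temp_idx, poc, alf_params)

def store_explicit_alf_params(temp_idx, poc, alf_params):
    if len(stored) == MAX_ENTRIES:
        stored.pop(0)                  # eviction rule left abstract here
    stored.append((temp_idx, poc, alf_params))

def candidates(current_temp_idx):
    # Traverse the array: only entries tagged with an equal or lower
    # TempIdx are effective candidates for temporal prediction.
    return [e for e in stored if e[0] <= current_temp_idx]
```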
[00122] Thus, according to the second technique of this description, video encoder 20 can generate a bit stream that includes an encoded representation of a current image of the video data. A current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. In addition, video encoder 20 can reconstruct the current image. Video encoder 20 also stores, in an array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before the current image. In addition, video encoder 20 stores, in the array, the temporal layer indices associated with the sets of ALF parameters. A temporal layer index associated with a set of ALF parameters indicates the temporal layer of the region in which the set of ALF parameters was used to apply an ALF filter. In this example, video encoder 20 determines, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs or a temporal layer lower than the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. Video encoder 20 can then apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region. After applying adaptive loop filtering to the current region, video encoder 20 can use the current region to predict a subsequent image of the video data.

Similarly, according to the second technique of this description, video decoder 30 can receive a bit stream that includes an encoded representation of a current image of the video data. A current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. In addition, video decoder 30 can reconstruct the current image. In this example, video decoder 30 stores, in an array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before the current image. Additionally, video decoder 30 stores, in the array, the temporal layer indices associated with the sets of ALF parameters. A temporal layer index associated with a set of ALF parameters indicates the temporal layer of the region in which the set of ALF parameters was used to apply an ALF filter. In this example, video decoder 30 can determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. Additionally, in this example, video decoder 30 can apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.

[00124] In some examples based on the second technique of this description, the POC value associated with each set of ALF filters can also be recorded. For example, a video coder can also store, in an array (for example, array 70), a POC value of a current image whose ALF parameters are explicitly coded. Thus, in this example, after coding/decoding a plurality of images, the video coder has stored, in the array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before a new current image.
In this example, it is necessary that when determining the applicable set of ALF parameters for the current region, a POC value associated with the applicable set of ALF parameters for the current region is equal to a POC Value of a reference image in a reference image list of the current image. [00126] In some examples based on the second technique of this description, each ALF filter to be stored in the set must be associated with a reference image that is included in a reference image set of the current image (these images would also be available in the decoded image temporary storage). That is, if an image is not included in the reference image set of the current image, the filters associated with the current image cannot be stored and used for ALF temporal prediction. [00127] In some examples based on the second technique of this description, the size of the set can Petition 870190061606, of 7/2/2019, p. 71/147 66/109 depend on the size of a set of reference images. For example, the size of the set can be equal to a maximum number of LT of reference images that are allowed in a set of reference images. [00128] In some examples, a video encoder does not generate a list specifically for ALF filter parameters (that is, ALF parameters), but the list is the same as the reference image lists that are generated for the current slice . In this case, the ALF parameters associated with the reference images of the current region are stored directly together with other information (such as the reconstruction samples, movement information of each block with a region) required for storage of reference images. As another alternative, the ALF filter parameter list is set equal to the reference image set of the current slice (or image). [00129] In another example, where each ALF filter stored in the array (for example, array70) is associated with a reference image included in the reference image set of the current image, in the list (set) of ALF filter parameters (associated with reference images included in the reference image set of the current image) is separately generated independently of the reference image lists for the current slice. An efficient generation of an efficient list of ALF filter parameters, such that the most frequently used sets of ALF filter parameters are in previous positions in the list of ALF filter parameters, syntax elements for signaling a particular order of candidate sets of filter parameter ALF in the list of Petition 870190061606, of 7/2/2019, p. 72/147 67/109 ALF filter parameters can be included in a slice header, similar to the syntax for modifying the reference image list in the slice header. [00130] According to a third technique, instead of using the FIFO rule to update an arrangement (s) for stored ALF filters, it is further proposed to consider Image Order Counting Differences (POC) to update the set (s) . For example, if an arrangement (for example, arrangement 50 of Figure 4A and Figure 4B, one of arrangements 60 of Figure 5, or set 70 of Figure 6. A video encoder can determine which entry in the set stores ALF filters associated with the POC value most different from a POC value of a current image. 
In an example based on the first technique, when a set of ALF parameters is explicitly signaled for a region of a current image, a video coder can determine, based on differences between a POC value of the current image and the POC values associated with the stored sets of ALF parameters, which set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs to replace with the applicable set of ALF parameters for the current region. In an example based on the second technique, when a set of ALF parameters is explicitly signaled for a region of a current image, a video coder can determine, based on differences between a POC value of the current image and the POC values associated with the stored sets of ALF parameters, which set of ALF parameters in the array to replace with the applicable set of ALF parameters for the current region.

[00131] In some examples, a selection of filters from the reference image set can be defined separately, different from the selection of reference images from the reference image set. In this case, the selected filters can be from an image that is not included in any reference image list of the current slice/tile/image.

[00132] According to a fourth technique, the signaling of an index of a selected set/subset of filters for ALF temporal prediction may depend on a temporal layer index. A subset of filters for ALF temporal prediction is a partial set of the ALF filters. For example, there can be 25 ALF filters per image. In this example, when using temporal prediction, the video coder can choose 10 rather than all 25 ALF filters to be associated with an image. In one example, the truncated unary binarization method can be used to code the index of the selected set of filters, and the maximum value of the allowed number of sets depends on the temporal layer index.

[00133] For example, according to an example of the fourth technique, video encoder 20 may include, in a bit stream, a syntax element that indicates an index of a selected set of ALF parameters. Similarly, video decoder 30 can obtain, from the bit stream, a syntax element that indicates an index of a selected set of ALF parameters. The selected set of ALF parameters can be in one of the arrays of the type used in the first technique or in the array of the type used in the second technique. In this example, video encoder 20 and/or video decoder 30 can determine, based on the selected set of ALF parameters in the array, an applicable set of ALF parameters for the current region. Video encoder 20 and/or video decoder 30 may apply, based on the applicable set of ALF parameters for the current region, an ALF filter to the current region. In this example, a format of the syntax element depends on a temporal layer index. For example, a truncated unary binarization method can be used to code the syntax element, and the maximum value of the allowed number of sets of ALF parameters depends on the temporal layer index. A sketch of this binarization is given below.

[00134] In some examples based on the fourth technique, the signaling of the index may also depend on POC differences. In other words, in the context of the example of the previous paragraph, the format of the syntax element is additionally dependent on POC differences. For example, if the index is 0, the selected set of ALF parameters is associated with the image whose POC value is closest to a POC value of a current image; if the index is 1, the selected set of ALF parameters is associated with the image whose POC value is the second closest to the POC value of the current image, and so on. In this example, if two or more of the sets of ALF parameters in the array or arrays are associated with images having the same POC distance from the current image, the sets of ALF parameters associated with images with lower (or, in other examples, higher) POC values are associated with lower index values.
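A minimal sketch of the truncated unary binarization mentioned above follows; the rule used here to derive the truncation bound from the temporal layer index is an assumption for illustration:

```python
def truncated_unary_bins(index, max_index):
    """Truncated unary binarization of index in [0, max_index]."""
    bins = [1] * index
    if index < max_index:
        bins.append(0)   # terminating 0 is omitted only when index == max_index
    return bins

def max_index_for_layer(temp_idx, sets_per_layer=6):
    # Hypothetical bound: a region of layer k can only reference sets of
    # layers 0..k, so fewer candidates (and shorter codewords) exist for
    # low temporal layers.
    return (temp_idx + 1) * sets_per_layer - 1
```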
[00135] According to a fifth technique, instead of inheriting both the filter coefficients and the class merging information, it is proposed that only the class merging information be inherited. That is, the filter indices for the different classes could be inherited from previously coded information, while new filter coefficients are signaled. Alternatively, or in addition, separate arrays can be allocated, with one array recording the filter indices for each class and the other recording the filter coefficients. A sketch of this inheritance is given below.

[00136] Thus, in an example according to the fifth technique, a video coder can store, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before the current image, each respective array of the plurality of arrays corresponding to a respective different temporal layer. In this example, the video coder can determine, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs, the class merging information and not the filter coefficients.

[00137] Furthermore, in some examples, the video coder can store, in a second plurality of arrays, sets of filter coefficients used in applying ALF filters to samples of images of the video data decoded before a current image, each respective array of the second plurality of arrays corresponding to a respective different temporal layer. As part of determining the applicable set of ALF parameters for the current region, the video coder can determine the applicable set of ALF parameters based on a set of filter coefficients in one of the arrays of the second plurality of arrays corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs, and based on the set of ALF parameters in an array of the first plurality of arrays corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs.

[00138] In another example according to the fifth technique, a video coder can store, in an array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before the current image. In this example, the video coder can determine, from the set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs or a temporal layer lower than the temporal layer to which the current region belongs, the class merging information and not the filter coefficients.
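A minimal sketch of the fifth technique follows: the class-to-filter mapping (merge information) is inherited from a stored set, while the coefficients that are applied are newly signaled:

```python
def filters_from_inherited_merge(inherited_filter_idx, signaled_filters):
    """inherited_filter_idx[c] is the inherited filter index of class c
    (for example, 25 classes); signaled_filters holds the newly signaled
    coefficient sets, one per merged group."""
    return [signaled_filters[inherited_filter_idx[c]]
            for c in range(len(inherited_filter_idx))]
```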
[00139] In addition, in some examples, the video coder can store, in a second array, sets of filter coefficients used in applying ALF filters to samples of images of the video data decoded before the current image. In such examples, the video coder can store, in the second array, temporal layer indices associated with the sets of filter coefficients. A temporal layer index associated with a set of filter coefficients indicates the temporal layer of the region in which the set of filter coefficients was used to apply an ALF filter. As part of determining the applicable set of ALF parameters for the current region, the video coder can determine the applicable set of ALF parameters based on a set of filter coefficients in the second array associated with the temporal layer to which the current region belongs or with a temporal layer lower than the temporal layer to which the current region belongs, and based on the set of ALF parameters in the first array associated with the temporal layer to which the current region belongs or with a temporal layer lower than the temporal layer to which the current region belongs.

[00140] According to a sixth technique, instead of inheriting both the filter coefficients and the class merging information, it is proposed that only the filter coefficients be inherited. That is, for the current slice/image, the relationship between class index and filter index can be further signaled when temporal prediction is used.

[00141] Thus, according to an example of the sixth technique, a video coder can store, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before the current image. Each respective array of the plurality of arrays corresponds to a respective different temporal layer. In this example, as part of determining an applicable set of ALF parameters for the current region, the video coder can determine, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs, the filter coefficients and not the class merging information.

[00142] In addition, in some examples, the video coder can store, in a second plurality of arrays, sets of class merging information used in applying ALF filters to samples of images of the video data decoded before the current image. Each respective array of the second plurality of arrays corresponds to a respective different temporal layer. As part of determining the applicable set of ALF parameters for the current region, the video coder can determine the applicable set of ALF parameters based on a set of class merging information in one of the arrays of the second plurality of arrays corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs, and based on the set of ALF parameters in an array of the first plurality of arrays corresponding to the temporal layer to which the current region belongs or corresponding to a temporal layer lower than the temporal layer to which the current region belongs.
[00143] According to another example of the sixth technique, a video coder can store, in an array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded before the current image. In this example, the video coder can determine, from the set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs or a temporal layer lower than the temporal layer to which the current region belongs, the filter coefficients and not the class merging information.

[00144] In addition, in some examples, the video coder can store, in a second array, sets of class merging information used in applying ALF filters to samples of images of the video data decoded before the current image. In this example, the video coder can also store, in the second array, temporal layer indices associated with the sets of class merging information. A temporal layer index associated with a set of class merging information indicates the temporal layer of the region in which the set was used to apply an ALF filter. As part of determining the applicable set of ALF parameters for the current region, the video coder can determine the applicable set of ALF parameters based on a set of class merging information in the second array associated with the temporal layer to which the current region belongs or with a temporal layer lower than the temporal layer to which the current region belongs, and based on the set of ALF parameters in the first array associated with the temporal layer to which the current region belongs or with a temporal layer lower than the temporal layer to which the current region belongs.

[00145] According to a seventh technique, even when temporal prediction is used, differences between the selected stored filters and the filters to be applied can also be signaled. In one example, the current design for temporal prediction, which allows the signaling of an index of a stored set of filters, can still be used. In addition, a flag can be used to indicate whether filter differences are signaled or not. If so, the differences are also signaled (a sketch is given below). In some examples, the filters of previously coded frames or slices can be added to, and treated as part of, the fixed filters. In this case, the number of fixed filters and the fixed filter coefficients can be changed adaptively. Alternatively, in some instances, when a set of filters is added to the fixed filters, pruning must be applied to avoid duplication.

[00146] In an example according to the seventh technique, video encoder 20 can determine, based on a selected set of ALF parameters in an array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. Alternatively, in this example, video encoder 20 can determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. In either case, video encoder 20 may include, in the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region. In some examples, video encoder 20 may include, in the bit stream, a syntax element that indicates whether the bit stream includes the indication of the difference.
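A minimal sketch of the seventh technique follows; coefficient arrays are assumed, and the entropy coding of the differences is omitted:

```python
import numpy as np

def derive_filters(selected_stored_filters, diff_flag, signaled_diffs=None):
    """Temporal prediction selects a stored set of filters; if diff_flag is
    set, signaled coefficient differences are added on top of it."""
    if not diff_flag:
        return [f.copy() for f in selected_stored_filters]
    return [f + d for f, d in zip(selected_stored_filters, signaled_diffs)]
```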
[00147] In another example according to the seventh technique, video decoder 30 can determine, based on a selected set of ALF parameters in an array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. Alternatively, in this example, video decoder 30 can determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region. In either case, video decoder 30 can obtain, from the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region. In this example, as part of determining the applicable set of ALF parameters for the current region, video decoder 30 can determine, based on the selected set of ALF parameters and the difference, the applicable set of ALF parameters for the current region. In some examples, video decoder 30 can obtain, from the bit stream, a syntax element that indicates whether the bit stream includes the indication of the difference.

[00148] According to an eighth technique, one or more sets of ALF filters can be stored in parameter sets (for example, sequence parameter sets or picture parameter sets) so that images in different coded video sequences can still use them. To avoid error resilience problems or random access problems, it is permitted to update the ALF filter sets in parameter sets using ALF filters signaled in slice headers. For example, when coding a bit stream, a video coder can store, in an array, sets of ALF filters specified in a parameter set of the bit stream. In this example, slice headers can include ALF parameters that define additional ALF filters or filter differences. A slice header is the part of a coded slice (or coded slice segment) containing the data elements pertaining to the first or all of the coding tree units represented in the slice (or slice segment). A sketch of this parameter-set storage is given below.
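A minimal sketch of the eighth technique follows; the class and method names are assumptions, and the update policy simply appends slice-header filters to the parameter-set table:

```python
class AlfFilterTable:
    """ALF filter sets carried in a parameter set (SPS/PPS), so images in
    different coded video sequences can reuse them."""
    def __init__(self, parameter_set_filters):
        self.filters = list(parameter_set_filters)

    def update_from_slice_header(self, slice_header_filters):
        # Slice headers may signal additional filters or filter differences;
        # here they are simply appended to the table.
        self.filters.extend(slice_header_filters)
```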
[00149] Figure 7 is a block diagram illustrating an example video encoder 20 that may implement the techniques of this disclosure. Figure 7 is provided for purposes of explanation and should not be considered limiting of the techniques as broadly exemplified and described in this disclosure. The techniques of this disclosure may be applicable to various coding standards or methods. [00150] Processing circuitry includes video encoder 20, and video encoder 20 is configured to perform one or more of the example techniques described in this disclosure. For instance, video encoder 20 includes integrated circuitry, and the various units illustrated in Figure 7 may be formed as hardware circuit blocks that are interconnected with a circuit bus. These hardware circuit blocks may be separate circuit blocks, or two or more of the units may be combined into a common hardware circuit block. The hardware circuit blocks may be formed as combinations of electrical components that form operation blocks such as arithmetic logic units (ALUs) and elementary function units (EFUs), as well as logic blocks such as AND, OR, NAND, NOR, XOR, XNOR, and other similar logic blocks. [00151] In some examples, one or more of the units illustrated in Figure 7 may be software units executing on the processing circuitry. In such examples, the object code for these software units is stored in memory. An operating system may cause video encoder 20 to retrieve the object code and execute the object code, which causes video encoder 20 to perform operations to implement the example techniques. In some examples, the software units may be firmware, stored in ROM, that video encoder 20 executes at startup. Accordingly, video encoder 20 is a structural component having hardware that performs the example techniques, or having software/firmware executing on the hardware to specialize the hardware to perform the example techniques. [00152] In the example of Figure 7, video encoder 20 includes a prediction processing unit 100, a video data memory 101, a residual generation unit 102, a transform processing unit 104, a quantization unit 106, an inverse quantization unit 108, an inverse transform processing unit 110, a reconstruction unit 112, a filter unit 114, a decoded image buffer 116, and an entropy encoding unit 118. Prediction processing unit 100 includes an inter-prediction processing unit 120 and an intra-prediction processing unit 126. Inter-prediction processing unit 120 may include a motion estimation unit and a motion compensation unit (not shown). [00153] Video data memory 101 may be configured to store video data to be encoded by the components of video encoder 20. The video data stored in video data memory 101 may be obtained, for example, from video source 18. Decoded image buffer 116 may be a reference image memory that stores reference video data for use in encoding video data by video encoder 20, for example, in intra- or inter-coding modes. Video data memory 101 and decoded image buffer 116 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 101 and decoded image buffer 116 may be provided by the same memory device or separate memory devices. In various examples, video data memory 101 may be on-chip with other components of video encoder 20, or off-chip relative to those components. Video data memory 101 may be the same as or part of storage medium 19 of Figure 1. [00154] Video encoder 20 receives video data. Video encoder 20 may encode each CTU in a slice of an image of the video data. Each of the CTUs may be associated with equally sized luma coding tree blocks (CTBs) and corresponding CTBs of the image. As part of encoding a CTU, prediction processing unit 100 may perform partitioning to divide the CTBs of the CTU into progressively smaller blocks. The smaller blocks may be coding blocks of CUs. For example, prediction processing unit 100 may partition a CTB associated with a CTU according to a tree structure. [00155] Video encoder 20 may encode CUs of a CTU to generate encoded representations of the CUs (i.e., coded CUs). As part of encoding a CU, prediction processing unit 100 may partition the coding blocks associated with the CU among one or more PUs of the CU. Thus, each PU may be associated with a luma prediction block and corresponding chroma prediction blocks.
Video encoder 20 and video decoder 30 may support PUs having various sizes. As indicated above, the size of a CU may refer to the size of the luma coding block of the CU, and the size of a PU may refer to the size of a luma prediction block of the PU. Assuming the size of a particular CU is 2Nx2N, video encoder 20 and video decoder 30 may support PU sizes of 2Nx2N or NxN for intra prediction, and symmetric PU sizes of 2Nx2N, 2NxN, Nx2N, NxN, or similar for inter prediction. Video encoder 20 and video decoder 30 may also support asymmetric partitioning for PU sizes of 2NxnU, 2NxnD, nLx2N, and nRx2N for inter prediction. [00156] Inter-prediction processing unit 120 may generate predictive data for a PU. As part of generating the predictive data for a PU, inter-prediction processing unit 120 performs inter prediction on the PU. The predictive data for the PU may include predictive blocks of the PU and motion information for the PU. Inter-prediction processing unit 120 may perform different operations for a PU of a CU depending on whether the PU is in an I slice, a P slice, or a B slice. In an I slice, all PUs are intra predicted. Hence, if the PU is in an I slice, inter-prediction processing unit 120 does not perform inter prediction on the PU. Thus, for blocks encoded in I-mode, the predicted block is formed using spatial prediction from previously encoded neighboring blocks within the same frame. If a PU is in a P slice, inter-prediction processing unit 120 may use unidirectional inter prediction to generate a predictive block of the PU. If a PU is in a B slice, inter-prediction processing unit 120 may use unidirectional or bidirectional inter prediction to generate a predictive block of the PU. [00157] Intra-prediction processing unit 126 may generate predictive data for a PU by performing intra prediction on the PU. The predictive data for the PU may include predictive blocks of the PU and various syntax elements. Intra-prediction processing unit 126 may perform intra prediction on PUs in I slices, P slices, and B slices. [00158] To perform intra prediction on a PU, intra-prediction processing unit 126 may use multiple intra prediction modes to generate multiple sets of predictive data for the PU. Intra-prediction processing unit 126 may use samples from sample blocks of neighboring PUs to generate a predictive block for a PU. The neighboring PUs may be above, above and to the right, above and to the left, or to the left of the PU, assuming a left-to-right, top-to-bottom encoding order for PUs, CUs, and CTUs. Intra-prediction processing unit 126 may use various numbers of intra prediction modes, for example, 33 directional intra prediction modes. In some examples, the number of intra prediction modes may depend on the size of the region associated with the PU. [00159] Prediction processing unit 100 may select the predictive data for PUs of a CU from among the predictive data generated by inter-prediction processing unit 120 for the PUs or the predictive data generated by intra-prediction processing unit 126 for the PUs. In some examples, prediction processing unit 100 selects the predictive data for the PUs of the CU based on rate/distortion metrics of the sets of predictive data. The predictive blocks of the selected predictive data may be referred to herein as the selected predictive blocks.
[00160] Residual generation unit 102 may generate, based on the coding blocks (for example, luma, Cb, and Cr coding blocks) of a CU and the selected predictive blocks (for example, predictive luma, Cb, and Cr blocks) of the PUs of the CU, residual blocks (for example, luma, Cb, and Cr residual blocks) of the CU. For instance, residual generation unit 102 may generate the residual blocks of the CU such that each sample in the residual blocks has a value equal to a difference between a sample in a coding block of the CU and a corresponding sample in a corresponding selected predictive block of a PU of the CU. [00161] Transform processing unit 104 may partition the residual blocks of a CU into transform blocks of TUs of the CU. For example, transform processing unit 104 may perform quad-tree partitioning to partition the residual blocks of the CU into transform blocks of TUs of the CU. Thus, a TU may be associated with a luma transform block and two chroma transform blocks. The sizes and positions of the luma and chroma transform blocks of TUs of a CU may or may not be based on the sizes and positions of prediction blocks of the PUs of the CU. A quad-tree structure known as a residual quad-tree (RQT) may include nodes associated with each of the regions. The TUs of a CU may correspond to leaf nodes of the RQT. [00162] Transform processing unit 104 may generate transform coefficient blocks for each TU of a CU by applying one or more transforms to the transform blocks of the TU. Transform processing unit 104 may apply various transforms to a transform block associated with a TU. For example, transform processing unit 104 may apply a discrete cosine transform (DCT), a directional transform, or a conceptually similar transform to a transform block. In some examples, transform processing unit 104 does not apply transforms to a transform block. In such examples, the transform block may be treated as a transform coefficient block. [00163] Quantization unit 106 may quantize the transform coefficients in a coefficient block. The quantization process may reduce the bit depth associated with some or all of the transform coefficients. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m. Quantization unit 106 may quantize a coefficient block associated with a TU of a CU based on a quantization parameter (QP) value associated with the CU. Video encoder 20 may adjust the degree of quantization applied to the coefficient blocks associated with a CU by adjusting the QP value associated with the CU. Quantization may introduce loss of information. Thus, quantized transform coefficients may have lower precision than the original coefficients. [00164] Inverse quantization unit 108 and inverse transform processing unit 110 may apply inverse quantization and inverse transforms to a coefficient block, respectively, to reconstruct a residual block from the coefficient block. Reconstruction unit 112 may add samples of the reconstructed residual block to corresponding samples from one or more predictive blocks generated by prediction processing unit 100 to produce a reconstructed transform block associated with a TU. By reconstructing transform blocks for each TU of a CU in this way, video encoder 20 may reconstruct the coding blocks of the CU.
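For purposes of illustration only, the relationship between the QP value and the degree of quantization can be sketched as follows. The sketch assumes the HEVC-style convention that the quantization step size approximately doubles for every increase of 6 in QP; the normative integer arithmetic and scaling lists are omitted.

```python
# A simplified illustration of QP-driven quantization as described above.
# Step size roughly doubles per +6 in QP; not the normative quantizer.

def quant_step(qp: int) -> float:
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeff: int, qp: int) -> int:
    # Larger QP -> larger step -> fewer bits, more information loss.
    return round(coeff / quant_step(qp))

def dequantize(level: int, qp: int) -> int:
    # The decoder can only recover the coefficient up to the step size,
    # which is why quantized coefficients have lower precision.
    return round(level * quant_step(qp))

assert dequantize(quantize(100, 28), 28) != 100  # lossy in general
```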
[00165] Filter unit 114 may perform one or more deblocking operations to reduce blocking artifacts in the coding blocks associated with a CU. Filter unit 114 may perform the filtering techniques of this disclosure. For example, filter unit 114 may store, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to the current image. In this example, each respective array of the plurality of arrays corresponds to a respective different temporal layer. Furthermore, in this example, filter unit 114 may determine, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region. In this example, filter unit 114 may apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to one or more blocks in the current region. [00166] In another example, filter unit 114 may store, in an array (for example, array 70 of Figure 6), sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to the current image. In addition, filter unit 114 may store, in the array, temporal layer indices associated with the sets of ALF parameters. A temporal layer index associated with a set of ALF parameters indicates a temporal layer of a region in which the set of ALF parameters was used to apply an ALF filter. Furthermore, in this example, filter unit 114 may determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs or a temporal layer lower than the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region. In this example, filter unit 114 may apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to one or more blocks in the current region.
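For purposes of illustration only, the per-temporal-layer storage scheme of the first of these examples can be sketched as follows. The array capacity and the FIFO-style replacement shown here are assumptions made for the example, and all names are invented.

```python
# A sketch of per-temporal-layer storage: one array per temporal layer,
# where the array for layer t holds ALF parameter sets from regions in
# layer t or any lower layer. Names and capacities are illustrative.

from typing import Any, List

MAX_TEMPORAL_LAYERS = 5
BUFFER_SIZE_PER_LAYER = 6   # assumption: fixed capacity per array

# One array of stored ALF parameter sets per temporal layer.
layer_arrays: List[List[Any]] = [[] for _ in range(MAX_TEMPORAL_LAYERS)]

def store_alf_params(params: Any, region_layer_idx: int) -> None:
    # A set used in layer t is a legal prediction source for layer t and
    # every higher layer, so it is copied into each of those arrays.
    for t in range(region_layer_idx, MAX_TEMPORAL_LAYERS):
        if len(layer_arrays[t]) == BUFFER_SIZE_PER_LAYER:
            layer_arrays[t].pop(0)          # FIFO-style replacement
        layer_arrays[t].append(params)

def select_for_region(current_layer_idx: int, signaled_index: int) -> Any:
    # The coder only ever indexes the array of the current region's
    # layer; dropping higher layers cannot change what it finds there.
    return layer_arrays[current_layer_idx][signaled_index]
```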
[00167] Decoded image buffer 116 may store the reconstructed coding blocks after filter unit 114 performs the one or more deblocking operations on the reconstructed coding blocks. Inter-prediction processing unit 120 may use a reference image containing the reconstructed coding blocks to perform inter prediction on PUs of other images. In addition, intra-prediction processing unit 126 may use reconstructed coding blocks in decoded image buffer 116 to perform intra prediction on other PUs in the same image as the CU. [00168] Entropy encoding unit 118 may receive data from other functional components of video encoder 20. For example, entropy encoding unit 118 may receive coefficient blocks from quantization unit 106 and may receive syntax elements from prediction processing unit 100. Entropy encoding unit 118 may perform one or more entropy encoding operations on the data to generate entropy-encoded data. For example, entropy encoding unit 118 may perform a CABAC operation, a context-adaptive variable length coding (CAVLC) operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a Probability Interval Partitioning Entropy (PIPE) coding operation, an Exponential-Golomb encoding operation, or another type of entropy encoding operation on the data. Video encoder 20 may output a bit stream that includes entropy-encoded data generated by entropy encoding unit 118. For example, the bit stream may include data representing values of transform coefficients for a CU. [00169] Figure 8 is a block diagram illustrating an example video decoder 30 that is configured to implement the techniques of this disclosure. Figure 8 is provided for purposes of explanation and is not limiting on the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video decoder 30 in the context of HEVC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods. [00170] Processing circuitry includes video decoder 30, and video decoder 30 is configured to perform one or more of the example techniques described in this disclosure. For instance, video decoder 30 includes integrated circuitry, and the various units illustrated in Figure 8 may be formed as hardware circuit blocks that are interconnected with a circuit bus. These hardware circuit blocks may be separate circuit blocks, or two or more of the units may be combined into a common hardware circuit block. The hardware circuit blocks may be formed as combinations of electrical components that form operation blocks such as arithmetic logic units (ALUs) and elementary function units (EFUs), as well as logic blocks such as AND, OR, NAND, NOR, XOR, XNOR, and other similar logic blocks. [00171] In some examples, one or more of the units illustrated in Figure 8 may be software units executing on the processing circuitry. In such examples, the object code for these software units is stored in memory. An operating system may cause video decoder 30 to retrieve the object code and execute the object code, which causes video decoder 30 to perform operations to implement the example techniques. In some examples, the software units may be firmware, stored in ROM, that video decoder 30 executes at startup. Accordingly, video decoder 30 is a structural component having hardware that performs the example techniques, or having software/firmware executing on the hardware to specialize the hardware to perform the example techniques. [00172] In the example of Figure 8, video decoder 30 includes an entropy decoding unit 150, a video data memory 151, a prediction processing unit 152, an inverse quantization unit 154, an inverse transform processing unit 156, a reconstruction unit 158, a filter unit 160, and a decoded image buffer 162. Prediction processing unit 152 includes a motion compensation unit 164 and an intra-prediction processing unit 166. In other examples, video decoder 30 may include more, fewer, or different functional components. [00173] Video data memory 151 may store encoded video data, such as an encoded video bit stream, to be decoded by the components of video decoder 30. The video data stored in video data memory 151 may be obtained, for example, from computer-readable medium 16, for example, from a local video source, such as a camera, via wired or wireless network communication of video data, or by accessing physical data storage media.
Video data memory 151 may form a coded picture buffer (CPB) that stores encoded video data from an encoded video bit stream. Decoded image buffer 162 may be a reference image memory that stores reference video data for use in decoding video data by video decoder 30, for example, in intra- or inter-coding modes, or for output. Video data memory 151 and decoded image buffer 162 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices. Video data memory 151 and decoded image buffer 162 may be provided by the same memory device or separate memory devices. In various examples, video data memory 151 may be on-chip with other components of video decoder 30, or off-chip relative to those components. Video data memory 151 may be the same as or part of storage medium 28 of Figure 1. [00174] Video data memory 151 receives and stores encoded video data (for example, NAL units) of a bit stream. Entropy decoding unit 150 may receive encoded video data (for example, NAL units) from video data memory 151 and may parse the NAL units to obtain syntax elements. Entropy decoding unit 150 may entropy decode entropy-encoded syntax elements in the NAL units. Prediction processing unit 152, inverse quantization unit 154, inverse transform processing unit 156, reconstruction unit 158, and filter unit 160 may generate decoded video data based on the syntax elements extracted from the bit stream. Entropy decoding unit 150 may perform a process generally reciprocal to that of entropy encoding unit 118. [00175] In addition to obtaining syntax elements from the bit stream, video decoder 30 may perform a reconstruction operation on a non-partitioned CU. To perform the reconstruction operation on a CU, video decoder 30 may perform a reconstruction operation on each TU of the CU. By performing the reconstruction operation for each TU of the CU, video decoder 30 may reconstruct residual blocks of the CU. [00176] As part of performing a reconstruction operation on a TU of a CU, inverse quantization unit 154 may inverse quantize, i.e., de-quantize, coefficient blocks associated with the TU. After inverse quantization unit 154 inverse quantizes a coefficient block, inverse transform processing unit 156 may apply one or more inverse transforms to the coefficient block in order to generate a residual block associated with the TU. For example, inverse transform processing unit 156 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the coefficient block. [00177] Inverse quantization unit 154 may perform particular techniques of this disclosure. For example, for at least one respective quantization group of a plurality of quantization groups within a CTB of a CTU of an image of the video data, inverse quantization unit 154 may derive, based at least in part on local quantization information signaled in the bit stream, the respective quantization parameter for the respective quantization group. Additionally, in this example, inverse quantization unit 154 may inverse quantize, based on the respective quantization parameter for the respective quantization group, at least one transform coefficient of a transform block of a TU of a CU of the CTU. In this example, the respective quantization group is defined as a group of successive, in coding order, CUs or coding blocks, such that boundaries of the respective quantization group must be boundaries of the CUs or coding blocks and a size of the respective quantization group is greater than or equal to a threshold. Video decoder 30 (for example, inverse transform processing unit 156, reconstruction unit 158, and filter unit 160) may reconstruct, based on inverse-quantized transform coefficients of the transform block, a coding block of the CU.
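For purposes of illustration only, one possible reading of the quantization group definition above is sketched below; the grouping rule, the per-group delta QP model, and all names are assumptions, not the normative derivation.

```python
# A rough sketch of grouping blocks into quantization groups: successive
# blocks in coding order are grouped until a size threshold is met, and
# one QP (base QP plus a signaled local delta) applies to each group.

from typing import List

def form_quantization_groups(block_sizes: List[int],
                             threshold: int) -> List[List[int]]:
    groups: List[List[int]] = []
    current: List[int] = []
    area = 0
    for i, size in enumerate(block_sizes):   # blocks in coding order
        current.append(i)
        area += size
        if area >= threshold:                # group must reach the threshold
            groups.append(current)
            current, area = [], 0
    if current:
        groups.append(current)
    return groups

def qp_for_group(base_qp: int, signaled_delta_qp: int) -> int:
    # The "local quantization information" is modeled here as one
    # signaled delta per quantization group.
    return base_qp + signaled_delta_qp
```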
[00178] If a PU is encoded using intra prediction, intra-prediction processing unit 166 may perform intra prediction to generate predictive blocks of the PU. Intra-prediction processing unit 166 may use an intra prediction mode to generate the predictive blocks of the PU based on samples of spatially neighboring blocks. Intra-prediction processing unit 166 may determine the intra prediction mode for the PU based on one or more syntax elements obtained from the bit stream. [00179] If a PU is encoded using inter prediction, entropy decoding unit 150 may determine motion information for the PU. Motion compensation unit 164 may determine, based on the motion information of the PU, one or more reference blocks. Motion compensation unit 164 may generate, based on the one or more reference blocks, predictive blocks (for example, predictive luma, Cb, and Cr blocks) for the PU. [00180] Reconstruction unit 158 may use transform blocks (for example, luma, Cb, and Cr transform blocks) of TUs of a CU and the predictive blocks (for example, predictive luma, Cb, and Cr blocks) of PUs of the CU, i.e., either intra-prediction data or inter-prediction data, as applicable, to reconstruct the coding blocks (for example, luma, Cb, and Cr coding blocks) of the CU. For example, reconstruction unit 158 may add samples of the transform blocks to corresponding samples of the predictive blocks to reconstruct the coding blocks of the CU. [00181] Filter unit 160 may perform a deblocking operation to reduce blocking artifacts associated with the coding blocks of the CU. Filter unit 160 may perform the filtering techniques of this disclosure. For example, filter unit 160 may store, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to a current image. In this example, each respective array of the plurality of arrays corresponds to a respective different temporal layer. For instance, for each respective array of a plurality of arrays corresponding to different temporal layers, filter unit 160 may store, in the respective array, sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array.
In this example, filter unit 160 may determine, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region. Furthermore, in this example, filter unit 160 may apply, based on the applicable set of ALF parameters for the current region, an ALF filter to one or more blocks in the current region. [00182] In another example, filter unit 160 stores, in an array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to a current image. In addition, in this example, filter unit 160 stores, in the array, temporal layer indices associated with the sets of ALF parameters. A temporal layer index associated with a set of ALF parameters indicates a temporal layer of a region in which the set of ALF parameters was used to apply an ALF filter. In this example, filter unit 160 may determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region. In this example, filter unit 160 may apply, based on the applicable set of ALF parameters for the current region, an ALF filter to one or more blocks in the current region. [00183] Video decoder 30 may store the coding blocks of the CU in decoded image buffer 162. Decoded image buffer 162 may provide reference images for subsequent motion compensation, intra prediction, and presentation on a display device, such as display device 32 of Figure 1. For instance, video decoder 30 may perform, based on the blocks in decoded image buffer 162, intra prediction or inter prediction operations for PUs of other CUs. [00184] Certain aspects of this disclosure have been described with respect to extensions of the HEVC standard for purposes of illustration. However, the techniques described in this disclosure may be useful for other video coding processes, including other standard or proprietary video coding processes not yet developed. [00185] Figure 9 is a flowchart illustrating an example operation of video encoder 20, in accordance with the first technique of this disclosure. The flowcharts of this disclosure are provided as examples. In other examples, actions may be performed in different orders, or operations may include more, fewer, or different actions. [00186] In the example of Figure 9, video encoder 20 generates a bit stream that includes an encoded representation of a current image of the video data (200). A current region (for example, a current slice or other unit) of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. Video encoder 20 may generate the bit stream according to any of the examples described elsewhere in this disclosure, such as the example of Figure 5. [00187] Additionally, video encoder 20 reconstructs the current image (202). For example, video encoder 20 may reconstruct a block of the current image by adding samples of reconstructed residual blocks to corresponding samples of one or more predictive blocks to produce reconstructed blocks. By reconstructing blocks in this way, video encoder 20 may reconstruct the coding blocks of the current image.
[00188] In addition, for each respective array of a plurality of arrays corresponding to different temporal layers, video encoder 20 may store, in the respective array (for example, one of arrays 60 of Figure 6), sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array (204). Each set of ALF parameters may include a set of filter coefficients and/or a set of ALF class merging information. [00189] Video encoder 20 determines, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region (206). For example, video encoder 20 may select the selected set of ALF parameters using a rate-distortion analysis of the sets of ALF parameters in the array corresponding to the temporal layer to which the current region belongs. In some examples, the applicable set of ALF parameters for the current region may be the same as the selected set of ALF parameters. In some examples, video encoder 20 may include, in the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region. [00190] Video encoder 20 may then apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region (208). In applying adaptive loop filtering to the current region, video encoder 20 may apply an ALF filter to one or more, but not necessarily all, blocks in the current region. For example, video encoder 20 may partition the current region into blocks (for example, 4x4 blocks). In this example, for each of the blocks, video encoder 20 may determine (for example, based on a direction and an activity of the block) a corresponding category for the block. In this example, the applicable set of ALF parameters for the current region may include filter coefficients for an ALF filter of the category for the block. In this example, video encoder 20 may then apply the ALF filter of the category to the block. [00191] After applying adaptive loop filtering to the current region, video encoder 20 uses the current region for prediction of a subsequent image of the video data (210). For example, video encoder 20 may use the current region for prediction of a block of the subsequent image.
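For purposes of illustration only, the per-block classification step of (208) can be sketched as follows, reusing the AlfParamSet shape from the earlier sketch. The gradient measures and the binning into 15 categories are simplified stand-ins, not the normative classification, which uses a larger number of classes.

```python
# A simplified sketch of per-block classification: each 4x4 block is
# mapped to a category from its direction and activity, and the category
# selects the filter from the applicable ALF parameter set.

from typing import List

def classify_block(block: List[List[int]]) -> int:
    # Horizontal and vertical gradients as crude direction/activity cues.
    grad_h = sum(abs(row[i + 1] - row[i])
                 for row in block for i in range(len(row) - 1))
    grad_v = sum(abs(block[j + 1][i] - block[j][i])
                 for j in range(len(block) - 1) for i in range(len(block[0])))
    activity = min(4, (grad_h + grad_v) // 64)        # 5 activity bins
    if grad_h > 2 * grad_v:
        direction = 1                                 # horizontal
    elif grad_v > 2 * grad_h:
        direction = 2                                 # vertical
    else:
        direction = 0                                 # no strong direction
    return direction * 5 + activity                   # category in [0, 14]

def filter_for_block(block: List[List[int]], alf_params) -> List[int]:
    # class_merge_info maps a category to a (possibly shared) filter.
    category = classify_block(block)
    return alf_params.filter_coeffs[alf_params.class_merge_info[category]]
```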
[00192] Figure 10 is a flowchart illustrating an example operation of video decoder 30, in accordance with the first technique of this disclosure. In the example of Figure 10, video decoder 30 receives a bit stream that includes an encoded representation of a current image of the video data (250). A current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. [00193] In addition, video decoder 30 reconstructs the current image (252). Video decoder 30 may reconstruct the current image according to any of the examples provided elsewhere in this disclosure. For example, video decoder 30 may reconstruct a block of the current image by adding samples of reconstructed residual blocks to corresponding samples of one or more predictive blocks to produce reconstructed blocks. By reconstructing blocks in this way, video decoder 30 may reconstruct the coding blocks of the current image. [00194] Video decoder 30 also stores, in a plurality of arrays, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to the current image (254). Each respective array of the plurality of arrays corresponds to a respective different temporal layer. For example, for each respective array of a plurality of arrays corresponding to different temporal layers, video decoder 30 may store, in the respective array (for example, one of arrays 60 of Figure 5), sets of ALF parameters used in applying ALF filters to samples of regions of images of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array. [00195] Additionally, video decoder 30 may determine, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region (256). For example, video decoder 30 may obtain, from the bit stream, an index indicating the selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs. In some examples, the applicable set of ALF parameters for the current region may be the same as the selected set of ALF parameters. In some examples, video decoder 30 may obtain, from the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region. [00196] Video decoder 30 may then apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region (258). In applying adaptive loop filtering to the current region, video decoder 30 may apply an ALF filter to one or more, but not necessarily all, blocks in the current region. For example, video decoder 30 may partition the current region into blocks (for example, 4x4 blocks). In this example, for each of the blocks, video decoder 30 may determine (for example, based on a direction and an activity of the block) a corresponding category for the block. In this example, the applicable set of ALF parameters for the current region may include filter coefficients for an ALF filter of the category for the block. In this example, video decoder 30 may then apply the ALF filter of the category to the block. [00197] Figure 11 is a flowchart illustrating an example operation of video encoder 20 in accordance with the second technique of this disclosure. In the example of Figure 11, video encoder 20 generates a bit stream that includes an encoded representation of a current image of the video data (300). A current region (for example, a current slice or other unit) of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs.
Video encoder 20 may generate the bit stream according to any of the examples described elsewhere in this disclosure, such as the example of Figure 5. [00198] In addition, video encoder 20 may reconstruct the current image (302). For example, video encoder 20 may reconstruct a block of the current image by adding samples of reconstructed residual blocks to corresponding samples of one or more predictive blocks to produce reconstructed blocks. By reconstructing blocks in this way, video encoder 20 may reconstruct the coding blocks of the current image. [00199] Video encoder 20 stores, in an array (for example, array 70 of Figure 6), sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to the current image (304). Additionally, video encoder 20 stores, in the array, temporal layer indices associated with the sets of ALF parameters (306). A temporal layer index associated with a set of ALF parameters indicates a temporal layer of a region in which the set of ALF parameters was used to apply an ALF filter. [00200] In addition, video encoder 20 determines, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs or a temporal layer lower than the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region (308). For example, video encoder 20 may select the selected set of ALF parameters using a rate-distortion analysis of the sets of ALF parameters in the array. In some examples, the applicable set of ALF parameters for the current region may be the same as the selected set of ALF parameters. In some examples, video encoder 20 may include, in the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region. [00201] Video encoder 20 applies, based on the applicable set of ALF parameters for the current region, an ALF filter to the current region (310). Video encoder 20 may apply the ALF filter to the current region according to any of the examples provided elsewhere in this disclosure. [00202] After applying the ALF filter to the current region, video encoder 20 uses the current region for prediction of a subsequent image of the video data (312). For example, video encoder 20 may use the current region for prediction of a block of the subsequent image. [00203] Figure 12 is a flowchart illustrating an example operation of video decoder 30 in accordance with a technique of this disclosure. In the example of Figure 12, video decoder 30 receives a bit stream that includes an encoded representation of a current image of the video data (350). A current region (for example, a current slice or other unit) of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs. [00204] Video decoder 30 may then reconstruct the current image (352). Video decoder 30 may reconstruct the current image according to any of the examples provided elsewhere in this disclosure. For example, video decoder 30 may reconstruct a block of the current image by adding samples of reconstructed residual blocks to corresponding samples
of one or more predictive blocks to produce reconstructed blocks. By reconstructing blocks in this way, video decoder 30 may reconstruct the coding blocks of the current image. [00205] In the example of Figure 12, video decoder 30 stores, in an array, sets of ALF parameters used in applying ALF filters to samples of images of the video data decoded prior to the current image (354). In addition, video decoder 30 stores, in the array, temporal layer indices associated with the sets of ALF parameters (356). A temporal layer index associated with a set of ALF parameters indicates a temporal layer of a region in which the set of ALF parameters was used to apply an ALF filter. [00206] Video decoder 30 may determine, based on a selected set of ALF parameters in the array whose associated temporal layer index indicates the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region (358). For example, video decoder 30 may obtain, from the bit stream, an index indicating the selected set of ALF parameters in the array. In some examples, the applicable set of ALF parameters for the current region may be the same as the selected set of ALF parameters. In some examples, video decoder 30 may obtain, from the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region. [00207] Video decoder 30 then applies, based on the applicable set of ALF parameters for the current region, an ALF filter to the current region (360). Video decoder 30 may apply the ALF filter to the current region according to any of the examples provided elsewhere in this disclosure. [00208] A video coder, as described in this disclosure, may refer to a video encoder or a video decoder. Similarly, a video coding unit may refer to a video encoder or a video decoder. Likewise, video coding may refer to video encoding or video decoding, as applicable. In this disclosure, the phrase "based on" may indicate based only on, based at least in part on, or based in some way on. This disclosure may use the term "video unit" or "video block" or "block" to refer to one or more sample blocks and the syntax structures used to code samples of the one or more sample blocks. Example types of video units may include CTUs, CUs, PUs, transform units (TUs), macroblocks, macroblock partitions, and so on. In some contexts, discussion of PUs may be interchanged with discussion of macroblocks or macroblock partitions. Example types of video blocks may include coding tree blocks, coding blocks, and other types of blocks of video data. [00209] The techniques of this disclosure may be applied to video coding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions, such as dynamic adaptive streaming over HTTP (DASH), digital video that is encoded onto a data storage medium, decoding of digital video stored on a data storage medium, or other applications. [00210] It should be recognized that, depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, and may be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the techniques).
Moreover, in certain examples, acts or events may be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially. [00211] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, for example, according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media that are non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processing circuits to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium. [00212] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to tangible, non-transient storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. [00213] The functionality described in this disclosure may be performed by fixed-function and/or programmable processing circuitry. For example, instructions may be executed by fixed-function and/or programmable processing circuitry. Such processing circuitry may include one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry.
Accordingly, the term "processor," as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements. Processing circuits may be coupled to other components in various ways. For example, a processing circuit may be coupled to other components via an internal device interconnect, a wired or wireless network connection, or another communication medium. [00214] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC), or a set of ICs (for example, a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware. [00215] Various examples have been described. These and other examples are within the scope of the following claims.
Claims:
1. A method of decoding video data, the method comprising: receiving a bit stream that includes an encoded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstructing the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, storing, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of regions of images of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determining, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.
2. The method of claim 1, wherein storing the sets of ALF parameters comprises: for each respective array of the plurality of arrays, storing, in the respective array, sets of ALF parameters used in applying ALF filters to samples of the regions of the images of the video data decoded prior to the current region that belong to the temporal layer corresponding to the respective array and that belong to temporal layers lower than the temporal layer corresponding to the respective array.
3. The method of claim 1, wherein at least two of the plurality of arrays include different numbers of sets of ALF parameters.
4. The method of claim 1, further comprising: storing, in at least one of the array corresponding to the temporal layer to which the current region belongs or the arrays of the plurality of arrays corresponding to temporal layers higher than the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region if the applicable set of ALF parameters for the current region is not already stored in the array.
5. The method of claim 4, wherein storing the applicable set of ALF parameters for the current region comprises determining, based on differences between a POC value of the current image and POC values associated with sets of ALF parameters, which set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs to replace with the applicable set of ALF parameters for the current region.
6. The method of claim 1, wherein it is a requirement that, in determining the applicable set of ALF parameters for the current region, a POC value associated with the applicable set of ALF parameters for the current region be equal to a POC value of a reference image in a reference image list for the current image.
7. The method of claim 1, further comprising: obtaining, from the bit stream, a syntax element that indicates an index of the selected set of ALF parameters, wherein determining the applicable set of ALF parameters for the current region comprises determining, based on the syntax element, the selected set of ALF parameters, and wherein a format of the syntax element depends on the temporal index.
8.
The method of claim 1, wherein determining the applicable set of ALF parameters for the current region comprises determining, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, class merging information and not filter coefficients.
9. The method of claim 1, wherein determining the applicable set of ALF parameters for the current region comprises determining, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, filter coefficients and class merging information.
10. The method of claim 1, further comprising: obtaining, from the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region, wherein determining the applicable set of ALF parameters for the current region comprises determining, based on the selected set of ALF parameters and the difference, the applicable set of ALF parameters for the current region.
11. A method of encoding video data, the method comprising: generating a bit stream that includes an encoded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstructing the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, storing, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of regions of images of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determining, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and after applying adaptive loop filtering to the current region, using the current region for prediction of a subsequent image of the video data.
12. The method of claim 11, wherein storing the sets of ALF parameters comprises: for each respective array of the plurality of arrays, storing, in the respective array, sets of ALF parameters used in applying ALF filters to samples of the regions of the images of the video data decoded prior to the current region that belong to the temporal layer corresponding to the respective array and that belong to temporal layers lower than the temporal layer corresponding to the respective array.
13. The method of claim 11, wherein at least two of the plurality of arrays include different numbers of sets of ALF parameters.
14. The method of claim 11, further comprising: storing, in at least one of the array corresponding to the temporal layer to which the current region belongs or the arrays of the plurality of arrays corresponding to temporal layers higher than the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region if the applicable set of ALF parameters for the current region is not already stored in the array.
15.
The method of claim 14, wherein storing the applicable set of ALF parameters for the current region comprises determining, based on differences between a POC value of the current image and POC values associated with sets of ALF parameters, which set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs to replace with the applicable set of ALF parameters for the current region.
16. The method of claim 11, wherein it is a requirement that, in determining the applicable set of ALF parameters for the current region, a POC value associated with the applicable set of ALF parameters for the current region be equal to a POC value of a reference image in a reference image list for the current image.
17. The method of claim 11, further comprising including, in the bit stream, a syntax element that indicates an index of the selected set of ALF parameters.
18. The method of claim 11, wherein determining the applicable set of ALF parameters for the current region comprises determining, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, class merging information and not filter coefficients.
19. The method of claim 11, wherein determining the applicable set of ALF parameters for the current region comprises determining, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, filter coefficients and class merging information.
20. The method of claim 11, further comprising: including, in the bit stream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region.
21. A device for decoding video data, the device comprising: one or more storage media configured to store the video data; and one or more processors configured to: receive a bit stream that includes an encoded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs; reconstruct the current image; for each respective array of a plurality of arrays corresponding to different temporal layers, store, in the respective array, sets of adaptive loop filtering (ALF) parameters used in applying ALF filters to samples of regions of images of the video data that are decoded prior to the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array; determine, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.
22.
The device of claim 21, wherein the one or more processors are configured such that, as part of storing the sets of ALF parameters, the one or more processors: for each respective array of the plurality of arrays, store, in the respective array, sets of ALF parameters used in applying ALF filters to samples of the regions of the images of the video data decoded prior to the current region of the current image that belong to the temporal layer corresponding to the respective array and that belong to temporal layers lower than the temporal layer corresponding to the respective array.
23. The device of claim 21, wherein at least two of the plurality of arrays include different numbers of sets of ALF parameters.
24. The device of claim 21, wherein the one or more processors are further configured to store, in at least one of the array corresponding to the temporal layer to which the current region belongs or the arrays of the plurality of arrays corresponding to temporal layers higher than the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region if the applicable set of ALF parameters for the current region is not already stored in the array.
25. The device of claim 24, wherein the one or more processors are configured such that, as part of storing the applicable set of ALF parameters for the current region, the one or more processors determine, based on differences between a POC value of the current image and POC values associated with sets of ALF parameters, which set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs to replace with the applicable set of ALF parameters for the current region.
26. The device of claim 21, wherein it is a requirement that, in determining the applicable set of ALF parameters for the current region, a POC value associated with the applicable set of ALF parameters for the current region be equal to a POC value of a reference image in a reference image list for the current image.
27. The device of claim 21, wherein the one or more processors are further configured to: obtain, from the bit stream, a syntax element that indicates an index of the selected set of ALF parameters, wherein the one or more processors are configured such that, as part of determining the applicable set of ALF parameters for the current region, the one or more processors determine, based on the syntax element, the selected set of ALF parameters, and wherein a format of the syntax element depends on the temporal index.
28. The device of claim 21, wherein the one or more processors are configured such that, as part of determining the applicable set of ALF parameters for the current region, the one or more processors determine, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, class merging information and not filter coefficients.
29. The device of claim 21, wherein the one or more processors are configured such that, as part of determining the applicable set of ALF parameters for the current region, the one or more processors determine, from the set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, filter coefficients and class merging information.
30.
30. The device of claim 21, wherein the one or more processors are further configured to obtain, from the bitstream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region, wherein the one or more processors are configured such that, as part of determining the applicable set of ALF parameters for the current region, the one or more processors determine, based on the selected set of ALF parameters and the difference, the applicable set of ALF parameters for the current region.

31. The device of claim 21, wherein the device comprises a wireless communication device further comprising a receiver configured to receive encoded video data.

32. The device of claim 31, wherein the wireless communication device comprises a telephone handset and wherein the receiver is configured to demodulate, according to a wireless communication standard, a signal comprising the encoded video data.

33. A device for encoding video data, the device comprising:

one or more storage media configured to store the video data; and

one or more processors configured to:

generate a bitstream that includes a coded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs;

reconstruct the current image;

for each respective array of a plurality of arrays corresponding to different temporal layers, store, in the respective array, sets of adaptive loop filter (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array;

determine, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region;

apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and

after applying adaptive loop filtering to the current region, use the current region to predict a subsequent image of the video data.

34. The device of claim 33, wherein the one or more processors are configured such that, as part of storing the sets of ALF parameters, the one or more processors:

for each respective array of the plurality of arrays, store, in the respective array, sets of ALF parameters used in applying ALF filters to samples of image regions of the video data decoded before the current region that are in the temporal layer corresponding to the respective array and that are in temporal layers lower than the temporal layer corresponding to the respective array.

35. The device of claim 33, wherein at least two of the plurality of arrays include different numbers of sets of ALF parameters.
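Claims 20, 30, and 42 allow the bitstream to carry a difference between the selected (temporally predicted) set and the set actually applied, so the predictor only needs to be close, not exact. A minimal sketch, continuing the earlier types and assuming the difference is signaled as one delta per filter coefficient (the claims leave the representation of the difference open):

```cpp
#include <algorithm>  // std::min

// Refine a temporally predicted parameter set with a signaled difference.
AlfParamSet applyDelta(const AlfParamSet& selected,
                       const std::vector<int16_t>& coeffDelta) {
  AlfParamSet applicable = selected;  // start from the predictor
  const size_t n = std::min(applicable.filterCoeffs.size(), coeffDelta.size());
  for (size_t i = 0; i < n; ++i)
    applicable.filterCoeffs[i] += coeffDelta[i];  // coefficient-wise refinement
  return applicable;
}
```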
36. The device of claim 33, wherein the one or more processors are further configured to store, in at least one of the array corresponding to the temporal layer to which the current region belongs or an array of the plurality of arrays corresponding to a temporal layer higher than the temporal layer to which the current region belongs, the applicable set of ALF parameters for the current region if the applicable set of ALF parameters for the current region has not yet been stored in the array.

37. The device of claim 36, wherein the one or more processors are configured such that, as part of storing the applicable set of ALF parameters for the current region, the one or more processors determine, based on differences between a POC value of the current image and POC values associated with sets of ALF parameters, which set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs to replace with the applicable set of ALF parameters for the current region.

38. The device of claim 33, wherein it is required that, in determining the applicable set of ALF parameters for the current region, a POC value associated with the applicable set of ALF parameters for the current region be equal to a POC value of a reference image in a reference image list for the current image.

39. The device of claim 33, wherein the one or more processors are further configured to include, in the bitstream, a syntax element that indicates an index of the selected set of ALF parameters.

40. The device of claim 33, wherein the one or more processors are configured such that, as part of determining the applicable set of ALF parameters for the current region, the one or more processors determine, from the selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, class merging information and not filter coefficients.

41. The device of claim 33, wherein the one or more processors are configured such that, as part of determining the applicable set of ALF parameters for the current region, the one or more processors determine, from the selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, filter coefficients and class merging information.

42. The device of claim 33, wherein the one or more processors are further configured to include, in the bitstream, an indication of a difference between the selected set of ALF parameters and the applicable set of ALF parameters for the current region.

43. The device of claim 33, wherein the device comprises a wireless communication device further comprising a transmitter configured to transmit encoded video data.

44. The device of claim 43, wherein the wireless communication device comprises a telephone handset and wherein the transmitter is configured to modulate, according to a wireless communication standard, a signal comprising the encoded video data.
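Claims 16, 26, and 38 restrict temporal prediction to parameter sets whose associated POC matches a reference image in the current image's reference image list, i.e., the predictor must come from an image that is still available as a reference. A sketch of that check, continuing the earlier types:

```cpp
#include <algorithm>  // std::find

// True if the candidate set's POC matches a reference image of the current
// image, as claims 16, 26, and 38 require of the applicable set.
bool isValidPredictor(const AlfParamSet& candidate,
                      const std::vector<int64_t>& refImageListPocs) {
  return std::find(refImageListPocs.begin(), refImageListPocs.end(),
                   candidate.poc) != refImageListPocs.end();
}
```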
45. A device for decoding video data, the device comprising:

means for receiving a bitstream that includes a coded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index that indicates a temporal layer to which the current region belongs;

means for reconstructing the current image;

for each respective array of a plurality of arrays corresponding to different temporal layers, means for storing, in the respective array, sets of adaptive loop filter (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array;

means for determining, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and

means for applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.

46. A device for encoding video data, the device comprising:

means for generating a bitstream that includes a coded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index indicating a temporal layer to which the current region belongs;

means for reconstructing the current image;

for each respective array of a plurality of arrays corresponding to different temporal layers, means for storing, in the respective array, sets of adaptive loop filter (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array;

means for determining, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region;

means for applying, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and

means for using, after applying adaptive loop filtering to the current region, the current region to predict a subsequent image of the video data.

47. A computer-readable data storage medium storing instructions that, when executed, cause one or more processors to:

receive a bitstream that includes a coded representation of a current image of video data, wherein a current region of the current image is associated with a temporal index indicating a temporal layer to which the current region belongs;

reconstruct the current image;

for each respective array of a plurality of arrays corresponding to different temporal layers, store, in the respective array, sets of adaptive loop filter (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array;

determine, based on a selected set of ALF parameters in the array corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region; and

apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region.
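Claims 17, 27, and 39 signal an index into the array for the current region's temporal layer, and claim 27 notes that the format of the syntax element may depend on the temporal index; one natural reason is that arrays for different layers can hold different numbers of sets (claims 23 and 35), so the index may need a different number of bits per layer. A decoder-side sketch of the lookup, with hypothetical names and a defensive clamp that is not claimed behavior:

```cpp
// Precondition: at least one set has been stored for this temporal layer.
const AlfParamSet& selectByIndex(const PerLayerArrays& arrays,
                                 int temporalId, size_t signaledIdx) {
  const std::vector<AlfParamSet>& a = arrays[temporalId];
  const size_t idx = signaledIdx < a.size() ? signaledIdx : a.size() - 1;
  return a[idx];  // the selected set of ALF parameters
}
```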
48. A computer-readable storage medium storing instructions that, when executed, cause one or more processors to:

generate a bitstream that includes a coded representation of a current image of the video data, wherein a current region of the current image is associated with a temporal index indicating a temporal layer to which the current region belongs;

reconstruct the current image;

for each respective array of a plurality of arrays corresponding to different temporal layers, store, in the respective array, sets of adaptive loop filter (ALF) parameters used in applying ALF filters to samples of image regions of the video data that are decoded before the current region and that are in the temporal layer corresponding to the respective array or a temporal layer lower than the temporal layer corresponding to the respective array;

determine, based on a selected set of ALF parameters in one of the arrays corresponding to the temporal layer to which the current region belongs, an applicable set of ALF parameters for the current region;

apply, based on the applicable set of ALF parameters for the current region, adaptive loop filtering to the current region; and

after applying adaptive loop filtering to the current region, use the current region to predict a subsequent image of the video data.
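Putting the pieces together, claims 47 and 48 recite the full per-region flow on the decoding and encoding sides, respectively. Continuing the sketches above, the decoder-side sequence might look like the following; applyAlfFilter is a hypothetical stand-in for the filtering step itself, which is omitted here:

```cpp
// One region's worth of the claimed flow: select a predictor from the
// current layer's array, refine it with the signaled difference, filter
// the reconstructed samples, and store the applied set for later regions.
void processRegion(PerLayerArrays& arrays, int temporalId, int64_t currentPoc,
                   size_t signaledIdx, const std::vector<int16_t>& coeffDelta) {
  const AlfParamSet& selected = selectByIndex(arrays, temporalId, signaledIdx);
  AlfParamSet applicable = applyDelta(selected, coeffDelta);
  applicable.poc = currentPoc;
  applicable.temporalId = temporalId;
  // applyAlfFilter(regionSamples, applicable);  // filtering step omitted
  storeAlfParams(arrays, applicable);            // claims 24 and 36
}
```

On the encoder side the same storage and selection run in lockstep, with the additional step recited in claims 33 and 48: after the adaptive loop filtering, the current region is used to predict a subsequent image.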